Defaults in Semantics and Pragmatics

First published Fri Jun 30, 2006; substantive revision Wed May 18, 2022

‘Default’ can mean many different things in theories of meaning. This is so not only because of the multiplicity of approaches and dimensions from which meaning can be studied but also because the theoretical landscape is changing swiftly and dynamically. First, there is the dimension of the ongoing debates concerning the delimitation of explicit content (e.g., Jaszczolt 2009a, 2016a). Second, discussions about ‘defaultness’ are propelled by the debates concerning the literal/nonliteral vis-à-vis salient/nonsalient distinction (e.g., Giora & Givoni 2015; Ariel 2016). Next, defaultness plays an important role in computational linguistics, which develops statistical models for learning compositional meaning using ‘big data’ (Jurafsky & Martin 2017 [Other Internet Resources]; Liang & Potts 2015). More recently, the ongoing ‘revolution’ in philosophy of language, noticeable in the progressive departure from analytic philosophy, has brought in debates on socially expected, or socially and politically correct – and as such, standard, or default – meanings in the context of discussions on epistemic norms for assertion (e.g., Goldberg 2011, 2015), theories of argumentation (e.g., Macagno 2022), and, relatedly, commitment and accountability for one’s linguistic actions (e.g., Haugh 2013; Borg 2019). Defaults are also relevant in discussions of the conventional import of lexical items such as expressives, in that their standard expressive (often offensive) meaning may not arise in certain types of context; the view on their truth-evaluability (or at least on which aspects of their meaning are truth-evaluable) is then closely related to the value one attaches to this context-dependence (Potts 2007, 2012; Richard 2008; Geurts 2007).

All in all, the term ‘default meaning’ has been used in a variety of ways in the literature, including statistically common interpretation, predictable meaning, salient meaning, or automatically retrieved meaning. To begin with a common-sense definition, the default interpretation of a speaker’s utterance is normally understood as the salient meaning intended by the speaker, or presumed by the addressee to have been intended, and recovered (a) without the help of inference from the speaker’s intentions or (b) without any conscious inferential process whatsoever. Default interpretations, that is, interpretations producing the standard content, are defined differently depending on how ‘default’ is defined: as a default for the lexical item, a default for the syntactic structure, a default for a particular construction, or even a default for a particular context (where, in addition, there is a necessary correlation with the adopted definition of ‘context’). The delimitation of such defaults can proceed according to different methods that, again, can affect the results and as such further contribute to the definition of defaults. For example, the psychological route is associated with automatic, inference-free interpretations, while the statistical route appeals to quantitative analyses of data, where the latter can pertain to corpora of conversations or big databases of word co-occurrence as used in statistical, distributional approaches in computational semantics.

In what follows, I attend to such seminal conceptualisations of defaultness, their provenance, and their relative merits (Sections 1–3). Section 4 follows with some remarks on what can be called ‘dynamic defaultness’ in the theories and models of a joint construction (‘co-construction’) of meaning that are steadily gaining ground across different strands of pragmatic research (Lewis 1979; Asher and Lascarides 2013; Elder and Haugh 2018). Next, I briefly move to the role of defaults on the crossroads of philosophy of language with disciplines such as ethics, epistemology, and law (Section 5).

The overview of the major perspectives and debates makes it clear that there is no consensus in the literature as to the unique set of properties that default interpretations should exhibit, opening up the discussion as to whether the term has only an intra-theoretic utility. Next, of course, comes the question of the utility of the term and the utility of the concept – something I touch upon in the concluding section in the context of conceptual engineering.

1. Default Interpretations in Semantics and Pragmatics

1.1 Defaults, the Said, and the Unsaid

In post-Gricean pragmatics it has been accepted that communicators convey more information than is contained in the expressions they utter. For example, sentences (1a)–(2a) normally convey (1b)–(2b).

(1a) Tom finished writing a paper and went skating.
(1b) Tom finished writing a paper and then went skating.
(2a) Picasso’s painting is of a crying woman.
(2b) The painting executed by Picasso is of a crying woman.

Such additions to the content of the uttered sentence were called by Grice (1975) generalized conversational implicatures (GCIs), that is, instances of context-independent pragmatic inference. Subsequently, the status of such context-independent additions has become the subject of heated debates. Some post-Griceans stay close to Grice’s spirit and propose that there are salient, unmarked, presumed meanings that occur independently of context (Horn, e.g., 2004, 2012; Levinson 1995, 2000; Recanati 2004). Some identify default meanings as those arising automatically in a given situation of discourse (Jaszczolt, e.g., 2005, 2010, 2016b; Elder & Jaszczolt 2016). Others reject defaults tout court and subsume such salient meanings under a rather broad category of context-dependent pragmatic inference (Sperber & Wilson 1986; Carston 2002).

Next, some, following Grice, consider such pragmatic contributions to utterance meaning to be generalized conversational implicatures (Levinson), others classify them as pragmatic input to what is said, albeit using a variety of theory-specific labels (Recanati, Carston), reserving the term ‘implicature’ for meanings that can be represented by a separate logical form and that function independently from the content of the main utterance in reasoning. Others define them as contributions to primary meanings where the latter cut across the explicit/implicit divide (Jaszczolt). Yet another possibility is to regard them as a separate level of what is implicit in what is said (Bach 1994, 2007; Horn 2006). In short, the status of such ‘default’ meanings is still far from clear. However, at least in general terms, there is a reason for drawing a distinction between salient, automatic enrichments and costly pragmatic inference since some of these pragmatic contributions go through normally, unnoticed, as a matter of course. As Horn (2004: 4–5) puts it,

Whatever the theoretical status of the distinction, it is apparent that some implicatures are induced only in a special context (…), while others go through unless a special context is present ….

In the above, the differences in using the term ‘default’ consist of the acceptance or rejection of at least the following properties:

  1. cancellability (also known as defeasibility) of preferred interpretations;
  2. availability of preferred interpretations without making use of conscious inference;
  3. shorter time required for their formation by the speaker and recognition by the addressee as compared with that required for the meanings induced through inference;
  4. the availability of preferred interpretations prior to the completion of the processing of the entire proposition (local, pre-propositional defaults).

When analysed in standard truth-conditional semantics, defaults can contribute to the truth-conditional content or affect what is implicit – presupposed or implicated (see e.g., Potts 2015). The side on which we find defaults in this distinction is largely dictated by the orientation concerning the semantics/pragmatics boundary, where the choice ranges from traditional semantic minimalism to radical versions of contextualism. I discuss these in more detail in the following sections. But it has to be remembered that the category is tangential to such concepts as what is said, conversational implicature, conventional implicature, presupposition, or, to use a more general term, projective content (on universals in projective content see Tonhauser et al. 2013). For example, presuppositions are stronger than defaults: presupposition triggers such as ‘know’, ‘regret’, ‘again’ or ‘manage’ do not give the hearer much choice of interpretation, save admitting some form of metalinguistic or quotative reading when these are negated, as in (3).

(3) I didn’t forget about your birthday again; it is the first time it happened.

What is said can rely on various types of defaults (Section 2) and contextually salient interpretations (Section 3), but likewise it can rely on effortful pragmatic inference from a variety of sources available in the situation of discourse. Relevant implicatures can be conventional (Grice 1975; Potts 2005) and conversational generalised, the latter understood either as grammar-driven (Chierchia 2004) or, more loosely, language-system-driven (Levinson 2000 and Section 1.3), but implicatures can also be entirely context-dependent (particularised). To add to this multi-dimensionality, context-dependent implicatures can on some occasions arise automatically, so when our definition of defaults relies on the definitional criterion of the automaticity of the process, as discussed above, then, by this definition, such implicatures can also be dubbed ‘defaults’ (Giora & Givoni 2015; Jaszczolt 2016a). In short, pursuing the standard route of analysing the types of content will not get us far with analysing defaults. Said/implicated, at-issue, or question-under-discussion-driven analyses (e.g., Roberts 2004) will encounter defaults on either side of the pertinent dichotomies.

A further complication in linking defaultness with the categories of the said or the unsaid is the fact that even weak implicatures or presuppositions adopted through accommodation can enjoy either status. In (4), we can accommodate globally the presupposition in (5) – either via inference or automatically.

(4) Tom says that Ian hasn’t finished writing a novel.
(5) Ian is writing a novel.

As a result, (5) can enjoy the default status according to some of the standard understandings of defaults as automatic, or more frequent, more salient, or even more ‘literal’ interpretations, or, alternatively, it can simply be an interpretation that is easier to process – arguably, in itself a plausible criterion for ‘defaultness’.

Next, conventional implicatures, that is, lexical meanings that, according to Grice (1975), do not contribute to what is said, have, at first glance, less to do with defaultness: they are entrenched, non-cancellable and form-dependent (detachable), and they cannot be calculated from maxims, principles or heuristics (Horn 1988: 123). However, more recent inquiries into the related category of expressives give more scope for pursuing defaultness. Slurs are, arguably, offensive by default, but their derogatory import does not carry across to contexts of banter and camaraderie. As to whether the expressive content is an implicature or part of what is said, the matter is still hotly discussed (see e.g., Richard 2008; Sileo 2017). In what follows, I try to bring some order into this unwieldy term vis-à-vis the semantics/pragmatics distinction and finish with some reflections on its usefulness for semantics and pragmatics.

1.2 Default Reasoning

Arguably, default meanings come from default reasoning. According to Kent Bach (1984), in utterance interpretation we use ‘jumping to conclusions’, or ‘default reasoning’. In other words, speakers know when context-dependent inference from the content of the sentence is required and when it is not. When it is not required, they progress, unconsciously, to the first available and unchallenged alternative. This step is cancellable when it becomes obvious to the addressee that the resulting meaning is not what the speaker had intended. What is important in this view is the proposed distinction between (conscious) inference and the unconscious act of ‘taking a step’, as Bach (1984: 40) calls it, towards the enriched, default interpretation. Such a move to the default meaning is not preceded by a conscious act of deliberation as to whether this meaning was indeed intended by the speaker. Rather, it just goes through unless it is stopped by some contextual or other factors that render it implausible.

Bach founds his account on the Gricean theory of intentional communication and therefore he has a ready explanation for the fact that different meanings come with different salience. He makes an assumption that intentions allow for different degrees of strength (Bach 1987). He also adds that salience has a lot to do with standardisation (Bach 1995, 1998), which consists of interpreting an utterance according to a pattern that is established by previous usage and as such short-circuits the process of (conscious) inference. In short, ‘jumping to conclusions’ is performed unconsciously and effortlessly.

For Bach, such default meanings are neither implicatures nor what is said (or explicatures): they are implicit in what is said, or implicitures. They are a result of ‘fleshing out’ the meaning of the sentence in order to arrive at the intended proposition, or ‘filling in’ some conceptual gaps in the semantic representation that, only after this filling in, becomes a full proposition. An example of ‘fleshing out’ is given in (6b), where the minimal proposition is expanded. ‘Filling in’ is exemplified in (7b), where a so-called propositional radical is completed.

(6a) Tom is too young.
(6b) Tom is too young to drive a car.
(7a) Everybody likes philosophy.
(7b) Everybody who reads the SEP likes philosophy.

But default meanings do not exhaust the category membership of the impliciture: implicitures can be a result of default reasoning as well as of a context-dependent process of inference. As with the distinctions discussed before, default meanings are orthogonal to the distinction between what is said, impliciture, and implicature: the default/inferential distinction cuts across all three.

1.3 Presumptive Meanings and Cancellability

Stephen Levinson (1995, 2000) argues for default interpretations that he calls presumptive meanings and classifies as implicatures. He uses the term borrowed from Grice, generalized conversational implicatures (GCIs), but ascribes some properties to them that differentiate them from Grice’s GCIs. For Levinson, GCIs are neither properly semantic nor properly pragmatic. They should not be regarded as part of semantics as, for example, in Discourse Representation Theory (Kamp and Reyle 1993), nor should they be seen as a result of context-dependent inference performed by the hearer in the process of the recovery of the speaker’s intention. Instead, “they sit midway, systematically influencing grammar and semantics on the one hand and speaker-meaning on the other.” (Levinson 2000: 25).

Such presumed meanings are the result of rational, communicative behaviour and arise through three assumed heuristics: (1) ‘What isn’t said, isn’t’; (2) ‘What is expressed simply is stereotypically exemplified’, and (3) ‘What’s said in an abnormal way isn’t normal’, called Q, I, and M heuristics (principles) respectively. Levinson’s GCIs, unlike their Gricean progenitors, can arise at various stages in utterance processing: the hearer need not have processed the whole proposition before arriving at some presumed meanings. Also, unlike Grice’s GCIs that are taken to be speaker’s intended meanings, Levinson’s presumptive meanings seem to be hearer’s meanings, obtained by the hearer as a result of the assumptions he or she made in the process of utterance interpretation (see Saul 2002 and Horn 2006 for discussion). On the other hand, like Grice’s GCIs, they are cancellable without contradiction.

Now, when defaults are delimited by contextual salience, arguably, cancellation may not occur except for cases of miscommunication. In other words, when the meaning is salient in a given context, it is likely that it had been meant by the speaker unless the speaker misjudged the common ground. But when they are understood as language-system-driven meanings, à la Levinson’s GCIs, cancellability constitutes direct evidence of such defaultness. Salient components of meaning added to the overtly expressed content (in the form of additional information or choices of interpretation) tend to be entrenched and as such difficult to cancel. But as Jaszczolt (2009a, 2016a) demonstrates, cancellability is a property that does not side with implicit as opposed to explicit content but rather with salience. If the main intended message is communicated indirectly, as in (8b), then it is the implicature (8c) that is difficult to cancel.

(Fred and Wilma talking about Wilma’s piano recital)
(8a) Fred: Was the recital a success?
(8b) Wilma: Lots of people left before the end.
(8c) The recital was not a success.

The presence or absence of cancellation in utterance interpretation is still a matter of dispute. It is difficult at present to decide between the rival views (i) that a particular GCI arose and was subsequently cancelled or (ii) that it did not arise at all due to being blocked by the context. There is not sufficient experimental evidence to support either stance. The answer to this question is closely dependent on the answer to the so-called globalism-localism dispute. If, as Levinson claims, default interpretations arise ‘locally’, out of the processing of a pre-propositional unit such as a word or a phrase, then they have to be subjected to frequent cancellation once the proposition has been processed. If, however, despite the incrementality of the interpretation process they arise post-propositionally, or ‘globally’, in accordance with Grice’s original assumption, then utterance interpretation can proceed without costly backtracking (see Geurts 2009; Jaszczolt 2008, 2009a, 2016a; Noveck & Sperber 2004).

1.4 Rhetorical Structure Rules

Gricean pragmatics is not the only approach in which defaults are discussed. Defaults and nonmonotonic reasoning are also well entrenched in computational linguistics. Defaults are distinguished there with respect to various units of meaning, from morphemes and words to multisentential units (Asher & Lascarides 1995; Lascarides & Copestake 1998). In this section I focus on intersentential default links and in the next I place ‘glue logic’ in the context of some other understandings of defaults in computational semantics.

The tradition of defaults in nonmonotonic reasoning can be traced back to Humboldt, Jespersen and Cassirer, and more recently to Reiter’s (1980) default logic and his default rules of the form:

  A : B
  -----
    C

where C can be concluded if A has been concluded and B can be consistently assumed (that is, ¬B cannot be proven). Such defaults can be built into standard logic:

It is just as valid to conclude ‘Presumably x is B’ from ‘x is A’ and ‘A’s are normally B’ as it is to conclude ‘x is B’ from ‘x is A’ and the ‘All A’s are B’. One does not have to set one’s mind to a different mode of reasoning to get the former. Veltman (1996: 257).

But the resulting logic will become nonmonotonic because there are default rules and default operators in the language. The literature on the topic is vast and is best considered as a separate topic from our current concern (see e.g., Thomason (1997) for an overview).
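Reiter’s rule schema can also be given a procedural gloss: conclude the consequent when the prerequisite is established and nothing contradicts the justification. The following is a minimal sketch using the classic ‘Tweety’ illustration from the nonmonotonic-reasoning literature; the string-based encoding of negation is a deliberate simplification (a real default logic would consult a prover here):

```python
# A procedural gloss on a Reiter default rule (A : B / C): conclude C when
# A is established and the negation of the justification B is not provable.
# Negation is encoded naively as the string prefix "not ".

def apply_defaults(facts, defaults):
    """Apply defaults (prereq, justification, consequent) to a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for prereq, justif, conseq in defaults:
            blocked = ("not " + justif) in facts
            if prereq in facts and not blocked and conseq not in facts:
                facts.add(conseq)
                changed = True
    return facts

# 'Birds normally fly': flies(tweety) follows by default...
rules = [("bird(tweety)", "flies(tweety)", "flies(tweety)")]
print(apply_defaults({"bird(tweety)"}, rules))
# ...but the conclusion no longer follows once we learn otherwise –
# the hallmark of nonmonotonicity:
print(apply_defaults({"bird(tweety)", "not flies(tweety)"}, rules))
```

The fixed-point loop mirrors the idea that a default conclusion stands only as long as its justification remains consistent with everything else that is known.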

A good example of how default interpretations can be accounted for in formal semantic theory is Segmented Discourse Representation Theory (SDRT, e.g., Asher & Lascarides 2003). SDRT is an offshoot of Discourse Representation Theory, a dynamic semantic approach to meaning according to which meaning arises incrementally through context change. In SDRT, defaults are regarded as highly probable routes that an interpretation of a sentence may take in a particular situation of discourse. There are rules of discourse, so-called rhetorical structure rules, that produce such default interpretations. These rules spell out the overall assumption that discourse is coherent and that this coherence can be further elaborated on by proposing a set of regularities. For example, two events represented as two consecutive utterances are presumed to stand in the relation of Narration, where the event described in the first utterance precedes the one from the second utterance. If the second utterance describes a state, then it stands in the relation of Background to the first one. There are many other types of such relations, among them Explanation and Elaboration. Axioms prevent a relation from being of two incompatible types at the same time. The relations between states and events are computed as strong probabilities, in the process called defeasible reasoning. The laws of reasoning are ‘defeasible’ in the sense that if the antecedent of a default rule is satisfied, then its consequent is normally, but not always, satisfied. The inference normally, but not always, obtains: ceteris paribus, the relation predicted by the law obtains, but in certain circumstances it may not. It is also nonmonotonic in that the relation may disappear with the growth of information.

SDRT includes the following components: (i) the semantics of sentences alone, that is the underspecified output of the syntactic processing of the sentences; (ii) the semantics of information content, that is, further addition to these underdetermined meanings, including default additions summarised by rhetorical structure rules; and (iii) the semantics of information packaging that ‘glues’ such enriched representations by means of the rules of the rhetorical structure of discourse. This ‘gluing together’ is defeasible, in that the rules result in the dependency A>B, that is ‘if A, then normally B’, where A and B stand for the enriched propositional representations of two sentences. In other words, they stand for the meanings of two consecutive utterances.
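The presumptions of Narration and Background described above can be sketched as a toy defeasible rule. The relation names follow SDRT, but the feature encoding of the discourse units is invented for illustration:

```python
# Toy 'glue logic': presume a rhetorical relation between two consecutive
# discourse units from coarse aspectual features of their content. The rule
# is defeasible: richer information could override the presumed relation.

def default_relation(first, second):
    """Return the presumed relation; normally, but not always, correct."""
    if second["aspect"] == "state":
        return "Background"   # a state backgrounds the preceding unit
    if first["aspect"] == "event" and second["aspect"] == "event":
        return "Narration"    # two events: presume temporal succession
    return "Elaboration"      # a catch-all default for the remaining cases

u1 = {"text": "Tom finished writing a paper.", "aspect": "event"}
u2 = {"text": "He went skating.", "aspect": "event"}
print(default_relation(u1, u2))  # → Narration
```

The axiom preventing a relation from being of two incompatible types at once corresponds here to the trivial fact that the function returns exactly one label per pair of units.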

The main strength of this approach is that it is fully formalized and it allows for computational modelling of discourse that takes pragmatic links between utterances seriously and incorporates them in the semantics. Next, it aspires to cognitive reality: although the cognitive reality of the particular rules can be disputed, the view of discourse processing that they jointly produce is highly plausible. Finally, as the authors often stress, SDRT allows them, for the most part, to model discourse without recourse to speakers’ intentions. However, a direct comparison with Gricean accounts of defaults is precluded by the fact that we would not be comparing like with like. In SDRT, the default interpretations are the defaults that are formalized with respect to the actually occurring discourse: there are rules that tell us how to take two events represented in two consecutive sentences, and there are also rules that specify the relation between them depending on some features of their content. Gricean defaults are, on the contrary, defaults for speakers’ overall knowledge state: they may arise because the speaker did not say something he or she could have said or because the speaker assumed some cultural or social information to be shared knowledge. For example, we cannot formalize the interpretation of (9a) as (9b) by means of rhetorical structure rules. The interpretation of (9a) as (9b) fits under the SDRT component (ii) rather than (iii) above, i.e., the semantics of information content rather than packaging.

(9a) Pablo’s painting is of a crying woman.
(9b) Picasso’s painting is of a crying woman.

Finally, it has to be mentioned that the discourse relations that for Asher and Lascarides belong to the ‘glue logic’ can alternatively be conceived of as part of the grammar proper: Lepore & Stone (2015), for example, incorporate conventions into minimalistically understood, grammar-driven semantics, and a fortiori into grammar proper; following Lewis’s (1979) ideas on convention and ‘scorekeeping’, they propose that “semantics describes interlocutors’ social competence in coordinating on the conversational record” (Lepore & Stone 2015: 256). Merits of putting conventions into grammar are, however, not easy to find (for a review see Jaszczolt 2016b).

1.5 Computational Semantics Landscape

The computational semantics landscape contains a few landmarks in which the concept of a default figures prominently, albeit under different labels. I have already discussed the role of defaults and inheritance reasoning in artificial intelligence research using the example of SDRT. This kind of research in computational linguistics is arguably the closest to theoretical linguistic semantics and pragmatics in that it directly appeals to human practices in reasoning. Pelletier & Elio (2005) refer to this characteristic as the psychologism of nonmonotonic logics – a property that was so fiercely banished from logic by Frege as a form of ‘corrupting intrusion’, in that ‘being true is quite different from being held as true’ (Frege 1893: 202). Pelletier and Elio write:

Unlike most other reasoning formalisms, nonmonotonic or default reasoning, including inheritance reasoning, is “psychologistic” – that is, it is defined only as what people do in circumstances where they are engaged in “commonsense reasoning”. It follows from this that such fundamental issues as “what are the good nonmonotonic inferences?” or “what counts as ‘relevance’ to a default rule?”, etc., are only discoverable by looking at people and how they behave. It is not a formal exercise to be discovered by looking at mathematical systems, nor is it to be decided by such formal considerations as “simplicity” or “computability”, etc. Pelletier & Elio (2005: 30).

Other landmarks include research on default feature specification in syntactic theory and default lexical inheritance (e.g., Gazdar et al. 1985; Boguraev & Pustejovsky 1990; Lascarides et al. 1996), where default inheritance comes from a simple idea pertaining to all taxonomies: regular features belonging to an entity of a certain type are inherited from the categories higher up in the taxonomic hierarchy, that is, simply by virtue of membership of a certain ontological type. As a result, only the non-default features have to be attended to (on various semantic networks in computational linguistics see also Stone 2016). To generalize, this line of research can lead to incorporation of information into logical forms, including, as can be seen in the example of SDRT, dynamic logical forms of discourses.

In a different camp there are statistical, distributional approaches to meaning where meaning is derived from information about co-occurrence of items gleaned from corpora and then quantitatively analysed. This orientation gave rise to current vector-based approaches (see, e.g., Jurafsky & Martin 2017 [Other Internet Resources]; Coecke et al. 2010 and for discussion Liang & Potts 2015). Vector semantics exploits the finding that dates back at least to Harris (1954) and Firth (1957) that the meaning of a word can be computed from the distribution of the words in its immediate context. The term ‘vector semantics’ derives from the representation of the quantitative values in this distribution called a ‘vector’, where the latter is defined as a distributional model that presents information in the form of a co-occurrence matrix. Vectors have been around since the 1950s but it is only recently that such distributional methods have been combined with logic-based approaches to meaning (see Liang & Potts 2015).
Vectors can measure the similarity of texts with respect to a lexical item, the similarity of lexical items with respect to sources, or, what interests us most, the co-occurrence of selected words in a selection of contexts (using additional methods to rule out co-occurrence by chance). In distributional semantics therefore the salient or default meaning is the meaning given by the observed high co-occurrence or, in other words, delimited by the high conditional probability of its occurrence in the context of other words.
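The idea that similarity of meaning falls out of similarity of contexts can be illustrated with a minimal co-occurrence matrix and cosine similarity. The toy corpus and window size below are invented for illustration; real distributional models add weighting schemes (e.g., PMI or tf-idf) and dimensionality reduction on top of raw counts:

```python
# Build a word-by-word co-occurrence matrix from a toy corpus, then compare
# two words by the cosine of their count vectors.
from collections import Counter
from math import sqrt

corpus = [
    "the painting of a crying woman",
    "the painting of a smiling woman",
    "a crying child",
]

window = 2  # count words up to 2 positions away as context
cooc = {}
for sent in corpus:
    toks = sent.split()
    for i, w in enumerate(toks):
        context = toks[max(0, i - window):i] + toks[i + 1:i + 1 + window]
        cooc.setdefault(w, Counter()).update(context)

def cosine(u, v):
    """Cosine similarity of two sparse count vectors (Counters)."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# 'crying' and 'smiling' share contexts ('of', 'a', 'woman'), so their
# vectors come out similar: high co-occurrence-based similarity is the
# distributional analogue of a salient, default association.
print(round(cosine(cooc["crying"], cooc["smiling"]), 2))  # → 0.87
```

Here the ‘default’ reading of a word is simply the one licensed by the highest conditional probability of co-occurrence in the observed data.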

Current computational semantics is beginning to combine compositional semantic theory (the logic-based approaches discussed above) with statistical models, conforming to the standard view of compositionality on which complex meanings are a function of lexical meanings and the mode of combination, arrived at through a recursive process, but at the same time aiming at capturing the generalization from (finite) past experiences that would inform machine learning. Defaults arise in this context in several different forms: (i) as shortcuts to standard meanings of more semantically predictable categories, that is, closed-class words such as determiners, pronouns or sentential connectives (this can perhaps be extended to types of predictable projective content such as various types of implicature or presupposition; see Tonhauser et al. 2013); (ii) as predictable cross-sentential discourse relations; (iii) as predictable discourse-anaphoric links; (iv) as meaning arising from frequent syntagmatic associations; (v) as meaning arising from frequent conversational scenarios, to name a few salient concepts. In this new, positively eclectic orientation in computational linguistics that combines logical and statistical approaches, the label ‘default’ is likely to lead to more confusion than utility in that it can pertain to either of the two contributing orientations. On the other hand, if the findings lead to the same set of what we can call ‘shortcuts through possible interpretations’, the confusion may be of merely methodological rather than ontological importance.

1.6 Defaults in Optimality-Theory Pragmatics

Optimality-Theory pragmatics (OT pragmatics; Blutner 2000; Blutner and Zeevat 2004) is another attempt at computational modelling of discourse but unlike SDRT it makes use of a post-Gricean, intention-based account of discourse interpretation. The process of interpretation is captured in a set of pragmatic constraints. The pragmatic additions to the underdetermined output of syntax are governed by a rationality principle called an optimization procedure that is spelled out as a series of constraints. These constraints are ranked as to their strength and they are defeasible, that is, they can be violated (see Zeevat 2000, 2004). The resulting interpretation of an utterance is the outcome of the working of such constraints. OT pragmatics formalizes and extends the Gricean principles of cooperative communicative behaviour as found in Horn (1984) and Levinson (1995, 2000). For example, STRENGTH means preference for readings that are informationally stronger, CONSISTENCY means preference for interpretations that do not conflict with the extant context, and FAITH-INT stands for ‘faithful interpretation’, that is, interpreting the utterance without leaving out any aspect of what the speaker says. The ordering of these constraints is FAITH-INT, CONSISTENCY, STRENGTH. The interaction of such constraints, founded on Levinson’s heuristics, explains how the hearer arrives at the intended interpretation. At the same time, this model can be regarded as producing default, presumed interpretations. With respect to finding an antecedent for an anaphor, for example, the interaction of the constraints explains the general tendency to look for the referent in the immediately preceding discourse rather than in more remote fragments, or rather than constructing a referent ad hoc. In other words, it explains the preference for binding over accommodation (van der Sandt 1992, 2012).
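The evaluation procedure behind such ranked, violable constraints can be sketched as lexicographic comparison of violation profiles. The constraint names below follow the text; the candidate interpretations and their violation counts are invented for illustration:

```python
# OT-style evaluation: constraints ranked FAITH-INT >> CONSISTENCY >> STRENGTH.
# The optimal candidate is the one whose violation profile is smallest when
# the profiles are compared in ranking order (lexicographically).

RANKING = ["FAITH-INT", "CONSISTENCY", "STRENGTH"]

def optimal(candidates):
    """candidates: dict mapping an interpretation to {constraint: violations}."""
    return min(candidates, key=lambda c: [candidates[c].get(k, 0) for k in RANKING])

# Two hypothetical readings of a scalar utterance:
candidates = {
    "weaker reading, consistent with context":
        {"FAITH-INT": 0, "CONSISTENCY": 0, "STRENGTH": 1},
    "stronger reading, clashing with context":
        {"FAITH-INT": 0, "CONSISTENCY": 1, "STRENGTH": 0},
}
print(optimal(candidates))  # CONSISTENCY outranks STRENGTH, so the
                            # weaker but consistent reading wins
```

Because a violation of a higher-ranked constraint can never be outweighed by any number of violations lower down, ranking rather than counting does the explanatory work.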

Defaults in OT pragmatics combine the precision of a formal account with the psychological reality of Gricean intention-based explanations. The main difference is that they do not seem to be defeasible: OT pragmatics tells us how an actual interpretation arose, rather than what the default interpretation could be. Constraints are ranked, so to speak, post hoc: they explain what actually happened and why, rather than what should happen according to the rules of rational communicative behaviour. In other words, context is incorporated even sooner into the process of utterance interpretation than in Gricean accounts and allows for non-defeasible, albeit standard, default interpretations. With respect to this feature they resemble the defaults of Default Semantics discussed in Section 1.8.

1.7 Defaults in Truth-Conditional Pragmatics

In truth-conditional pragmatics (Recanati, e.g., 2004, 2010), the meaning of an utterance consists of the output of syntactic processing combined with the output of pragmatic processing. Pragmatic processing, however, is not necessarily fulfilled by conscious inference: processes that enrich the output of syntax are sub-doxastic, direct, and automatic. The resulting representation of utterance meaning is the only representation that has cognitive reality and it is subject to truth-conditional analysis. On this account, the content of an utterance is arrived at directly, similar to the act of perception of an object. Recanati calls this view anti-inferentialist in that “communication is as direct as perception” (Recanati 2002: 109): the processing of the speaker’s intentions is (at least normally) direct, automatic, and unreflective. Such processes enriching the actually uttered content are called primary pragmatic processes. Some of them make use of contextual information, others are context-independent. So, they include some cases of Grice’s GCIs as well as some particularised implicatures (PCIs; on implied content see also Tonhauser et al. 2013) – but only the ones which further develop the logical form of the uttered sentence. When the pragmatic addition constitutes a separate thought, it is, on this account, an implicature proper, arrived at through a secondary, conscious, and reflective pragmatic process.

There are two kinds of enrichment of the content obtained through syntactic processing: (i) completion of a semantically incomplete proposition, as in (10b), called saturation, and (ii) further elaboration of the meaning of the sentence that is not guided by any syntactic or conceptual gaps but instead is merely triggered by the hearer’s opinion that something other than the bare meaning of the sentence was intended, as in (11b). The latter process is called free enrichment.

(10a) The fence isn’t strong enough.
(10b) The fence isn’t strong enough to withstand the gales.
(11a) John hasn’t eaten.
(11b) John hasn’t eaten dinner yet.

Default interpretations are here defaults for the processing of an utterance in a particular context. Automatic and unconscious enrichment produces a default interpretation of the utterance and “[o]nly when there is something wrong does the hearer suspend or inhibit the automatic transition which characterizes the normal cases of linguistic communication” (Recanati 2002: 109). To sum up, such defaults ensue automatically, directly, without the effort of inference. They are cancellable and they can make use of contextual clues, but they are not ‘processes’ in any cognitively interesting sense of the term: they don’t involve conscious inference, although, in Recanati’s terminology, they do involve inference in the broad sense: the agent is not aware of performing an inference but is aware of the consequences of this pragmatic enrichment of the interpreted sentence.

1.8 Types of Defaults in Default Semantics

One of the main questions to ask about any theory of utterance interpretation is what sources information about meaning comes from. In Default Semantics (Jaszczolt, e.g., 2009, 2010, 2016a, 2021), utterance meaning is the outcome of the merger of information that comes from five sources: (i) word meaning and sentence structure (WS); (ii) situation of discourse (SD); (iii) properties of the human inferential system (IS); (iv) stereotypes and presumptions about society and culture (SC); and (v) world knowledge (WK). WS is the output of the syntactic processing of the sentence, or its logical form. SD stands for the broadly understood context in which the discourse is immersed. IS pertains to properties of mental states which trigger certain types of interpretations. For example, the property of intentionality ensures that we normally use referring expressions with a referential intention that is the strongest for the given context. SC pertains to the background knowledge of societal norms and customs and cultural heritage. WK encompasses information about physical laws, nature, environment, etc. It is important to stress that the four sources that accompany WS do not merely enrich the output of the latter. All of the sources are equally powerful and can override each other’s output. This constitutes a substantial breakaway from the established boundary between explicit and implicit content.

The identification of the sources also allows us to propose a processing model in Default Semantics in which three types of contribution to utterance interpretation are distinguished: (i) processing of the sentence (called combination of word meaning and sentence structure, WS); (ii) conscious pragmatic inference (CPI) from three of the sources distinguished above: SD, SC, and WK; and (iii) two kinds of default, automatic meanings: cognitive defaults (CD) triggered by the source IS, and social, cultural and world-knowledge defaults (SCWD).

The primary meaning is arrived at through the interaction of these processes and therefore need not bear close resemblance to the logical form of the sentence; the output of WS can vary in significance as compared with the output of other types of processes. For example, to borrow Bach’s (1994) scenario, let us imagine little Johnny cutting his finger and crying, to which his mother reacts by uttering (12a).

(12a) You are not going to die.

The what is said/explicature of (12a) is something to the effect of (12b). There may also be other communicated meanings but those fall in the domain of implicatures.

(12b) You are not going to die from this cut.

In Default Semantics, the primary content of an utterance is its most salient meaning. This is so even when this meaning does not bear any resemblance to the logical form derived from the syntactic structure of the uttered sentence. In other words, CPI can override WS and produce, say, (12c) as utterance meaning (called primary meaning, represented in a merger representation) for the given context. The explicit content of the utterance need not be even partially isomorphic with the meaning of the uttered sentence: it need not amount to the development of the sentence’s logical form.

(12c) There is nothing to worry about.
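The idea that the output of a pragmatic process can override the output of syntax can be given a toy rendering. This is a loose illustration, not Jaszczolt's formal model: the numeric 'strength' values and the string representations of the propositions are invented purely to show the mechanics of the claim.

```python
# Toy illustration of merger in Default Semantics -- NOT the theory's
# formal apparatus. The "strength" numbers are invented; the point is
# only that the output of pragmatic processing (CPI) can override the
# output of syntactic processing (WS), as in Bach's scenario in (12).

def merger_representation(outputs):
    """outputs: (process, proposition, strength) triples.
    Return the contribution that wins the merger; the real theory merges
    structured representations, not strings with weights."""
    return max(outputs, key=lambda o: o[2])

outputs = [
    ("WS",  "You are not going to die.",        0.3),  # logical form
    ("CPI", "There is nothing to worry about.", 0.8),  # contextual inference
]

process, primary_meaning, _ = merger_representation(outputs)
# CPI overrides WS: the primary meaning need not develop the logical form.
```

The design choice mirrors the claim in the text: because no contribution is privileged, the primary meaning is simply whichever output wins the interaction, not necessarily a development of the sentence's logical form.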

CDs and SCWDs are default interpretations. Similar to Recanati’s automatic free enrichment, these default meanings cut across Grice’s GCI/PCI divide. Some of them arise due to the properties of words or constructions used and are present by default independently of the context of the utterance, while others are default meanings for the particular situation of discourse. CDs are default interpretations that are triggered by the properties of mental states. For example, when speakers use a definite description in an utterance, they normally use it referentially (about a particular, known, intersubjectively recognisable individual) rather than attributively (about whoever fits the description). This default referential use can be given a functional as well as a cognitive explanation. Firstly, it can be explained in terms of the strength of the referential intention associated with the act of utterance: ceteris paribus, humans provide the strongest information relevant and available to them. At the same time, in cognitive terms, it can be explained through the property of mental states that underlie the speaker’s speech act: the property of intentionality or aboutness, in the sense in which the mental state is about a particular object, be it a person, thing, or situation. Just as the strongest referring is the norm, so the strongest aboutness is the norm, the default. For example, the description ‘the architect who designed St Paul’s cathedral’ in (13a) is likely to be interpreted as ‘Christopher Wren’, as in (13b).

(13a) The architect who designed St Paul’s cathedral was a genius.
(13b) Sir Christopher Wren was a genius.

Next, SCWDs are default interpretations that arise due to the shared cultural and social background of the interlocutors. To use a well worn example, in (14a), it is the shared presumption that babies are raised by their own mothers that allows the addressee to arrive at (14b).

(14a) The baby cried and the mother picked it up.
(14b) The baby cried and the baby’s mother picked it up.

In CDs and SCWDs, no conscious inference is involved. The natural concomitant of reducing the role of the logical form (WS) to that of one of four equally potent constituents of utterance meaning is a revised view of compositionality. The compositional nature of meaning is retained as a methodological assumption, but this compositionality is now sought at the level of the merger of information from the five sources, arrived at through the interaction of the four identified processes. The output of these processes is called a merger representation and is expected to be a compositional structure. Current research focuses on providing an algorithm for the interaction of the output of the identified processes.

2. Definitional Characteristics of Default Interpretations

It is evident from the sample of approaches presented above that the notion of default meaning is used slightly differently in each of them. We can extract the following differences in the understanding of default interpretations:

[1a] Defaults belong to competence.
[1b] Defaults belong to performance.

[2a] Defaults are context-independent.
[2b] Defaults can make use of contextual information.

[3a] Defaults are easily defeasible.
[3b] Defaults are not normally defeasible.

[4a] Defaults are a result of subdoxastic, automatic process.
[4b] Defaults can involve conscious pragmatic inference.

[5a] Defaults are developments of the logical form of the uttered sentence.
[5b] Defaults need not enrich the logical form of the sentence but may override it (which is orthogonal to the question as to whether defaults have to be literal meanings).

[6a] Defaults can all be classified as one type of pragmatic process.
[6b] Defaults come from qualitatively different sources in utterance processing.

There is also disagreement concerning the following properties, to be discussed below:

[7a] Defaults are always based on a complete proposition.
[7b] Defaults can be ‘local’, ‘sub-propositional’, based on a word or a phrase.

[8a] Defaults necessarily arise more quickly than non-default meanings. Hence they can be tested for experimentally by measuring the time of processing of the utterance.
[8b] Defaults do not necessarily arise more quickly than non-default meanings because both types of meaning can be based on conscious, effortful inference. Hence, the existence of defaults cannot be tested experimentally by measuring the time of processing of the utterance.

The ‘whose meaning’ question also gives rise to controversies, such as those discussed in Section 4:

[9a] Default meanings are static: they are intended, or recovered, or (normally) both.
[9b] Default meanings are dynamic, interactional, and as such are the result of the joint construction of meaning in conversation.

Questions on the boundary of linguistics and ethics, linguistics and epistemology, or linguistics and law (to name a few) also give rise to rival positions, such as:

[10a] Speakers are accountable only for the minimal semantic content of their utterances.
[10b] Speakers are accountable for standard, default (in one of the above senses) interpretations of their utterances.

This may lead to more specific controversies, such as:

[10a′] What counts as a lie or an insult pertains to default (in one of the above senses) content.
[10b′] What counts as a lie or an insult pertains to minimal semantic content.

These are addressed in Section 5.

[1]–[8] are the most standardly accepted characteristics of default interpretations in theoretical semantics and pragmatics. We shall not include here definitional characteristics of defaults in computational linguistics as these are a subject for a separate study. Some of the properties in [1]–[8] are interrelated, while others just tend to occur together. Levinson’s presumptive meanings, for example, are defeasible [3a], local [7b], pertain to competence [1a], and are faster to process than inferential meanings [8a]. They are competence defaults of the type [1a] because they arise independently of the situation of discourse and are triggered by the construction alone, due to the presumed default scenario that it pertains to. For example, scalar inference from ‘many’ to ‘not all’ is a case of a competence-based, context-independent, local default. Similarly, the rhetorical structure rules of SDRT give rise to competence defaults. (15b) is a result of the common, shared knowledge that pushing normally results in falling.

(15a) You pushed me and I fell.
(15b) You pushed me and as a result I fell.

As regards feature [7], it is at least conceivable that presumed meanings arise as soon as the triggering word or construction has been processed by the hearer. For Levinson (1995, 2000), salient meanings have this property of arising even before the processing of the sentence is completed but can subsequently be cancelled if further context witnesses against them. In other words, they arise pre-propositionally, or locally. Discourse interpretation proceeds incrementally and, similarly, the assignment of default meanings to the processed segments is incremental. For example, the scalar term ‘many’ in (16a) triggers the presumed meaning ‘not all’ as soon as it has been processed. The subscript d in (16b) stands for the default meaning and is placed immediately after the triggering construction.

(16a) Many people liked Peter Carey’s new novel.
(16b) Many_d (d = many but not all) people liked Peter Carey’s new novel.

Similarly, ‘paper cup’ and ‘tea cup’ give rise to presumed meanings locally, as in (17b) and (18b) respectively.

(17a) Those paper cups are not suitable for hot drinks.
(17b) Those paper cups_d (d = cups made of paper) are not suitable for hot drinks.
(18a) I want three tea cups, three saucers and three spoons please.
(18b) I want three tea cups_d (d = cups used for drinking tea), three saucers and three spoons please.

Inferences such as those in (17b) and (18b) are very common. They are, however, substantially different from the inference in (16b) in that the resulting meaning is the lexical meaning of the collocation, similar to that of a compound. Other examples include ‘pocket knife’ vs. e.g., ‘bread knife’, and ‘coffee spoon’ vs. e.g., ‘silver spoon’. It is worth remembering that on Levinson’s account, presumed, salient interpretations can be explained through the principles of rational communicative behaviour summed up as his Q, I and M heuristics (see Section 1.2 and Levinson 1995, 2000). (16b) arises through the Q-heuristic, ‘What isn’t said isn’t’, while (17b) and (18b) arise through the I-heuristic, ‘What is expressed simply is stereotypically exemplified’. Most generally, the defaults that arise through the Q-heuristic exploit a comparison with what was not, but might have been, said. For example, ‘most’ triggers an inference to a denial of a stronger item ‘all’; ‘believe’ triggers an inference to ‘not know’. At the same time, they are all easily cancellable, as (16c) illustrates.

(16c) Many, and possibly all, people liked Peter Carey’s new novel.

The I-heuristic exploits only what there is in the sentence: it is an inference to a stereotype and as such is not so easily cancellable. For example, (19) and (20) seem rather bizarre.

(19) Those paper cups, I mean cups used for storing paper, are full.
(20) I want three tea cups, I mean cups used for storing tea leaves.

Perhaps the fact that these defaults are not so easily cancellable comes from their resemblance to lexical compounds: as in the case of compounds, the link between the juxtaposed lexemes is very strong. If indeed it is plausible to treat them on a par with compounds, then they are not very useful as a supporting argument for local defaults: instead of defaults, we have the lexical meaning of compounds.

Local defaults allow us to dispose of the level of an underspecified propositional representation in semantic theory. Since the inferences proceed incrementally, arising as soon as the triggering expression is encountered, there is no level of a minimal proposition that would constitute a foundation for further inferences. If there is one, it is merely accidental, in that the triggering item may happen to be placed at the end of the sentence, for example ‘tea cups’ in the first clause of (20) above. But it is also important to note that the status of such defaults is still far from clear. For example, Levinson’s defaults are local, but at the same time ‘cancellable’ to the extent that the context may prevent them from arising. This leads to a difficulty in examples such as (21)–(22).

(21) You are allowed five attempts to get the prize.
(22) You are allowed to do five minutes of piano practice today because it is late.

It is clear that in (21) ‘five’ is to be understood as ‘at most five’. How are we to model the process of utterance interpretation for this case? Are we to propose that the inference from ‘at least five’ to ‘exactly five’ takes place and is then cancelled? Or are we to propose that ‘five’ is by default ‘at least five’ (or underdetermined ‘five’, or ‘exactly five’, depending on the orientation; see Horn 1992; Koenig 1993; Bultinck 2005) and becomes altered in the process of pragmatic inference to ‘at most five’ in the context of ‘allow’? But then, ‘allow’ is also present in (22) and there the inference to ‘at most’ is not at all salient: doing a longer piano practice is generally preferred but may not be what the addressee likes doing, and ‘five’ may end up, in this context, meaning ‘as little as five’ or ‘five or more’, stressing that more than five is not expected but allowed. In (23), the problem is even more salient. If ‘five’ locally triggers the ‘exactly’ meaning, then the default has to be cancelled immediately afterwards, when ‘are needed’ has been processed and the ‘at least’ interpretation becomes obvious.

(23) Five votes are needed to pass the proposal.

Alternatively, we can stipulate that the first inference takes place after the word ‘needed’. It is clear that a lot needs to be done to clarify the notion of local defaults: most importantly, (i) what counts as the triggering unit, (ii) to what extent context is consulted, and (iii) how common cancellation is. But it seems that if defaults prove to be so local as to arise out of words or even morphemes, then they are part of the computational power of grammar and they belong to grammar and lexicon rather than to semantics and pragmatics. Chierchia (2004) and Landman (2000) represent this view. Chierchia argues that since scalar implicatures do not arise in downward-entailing contexts (contexts that license inference from a set to its subset), there is a clear syntactic constraint on their behaviour (but see Chemla et al. 2011). Jaszczolt (2012) calls such units that give rise to defaults or inferential modification ‘fluid characters’, employing Kaplan’s (1989) content-character distinction, to emphasise the fact that the unit of meaning that leads to inference or to a default meaning varies from context to context and from speaker to speaker: characters are ‘fluid’ because they correspond to ‘flexible inferential bases’ or ‘flexible default bases’. But much more theorizing and substantial empirical support are needed to establish the exact size of such local domains and the corresponding fluid characters.
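The incremental picture of local defaults and their cancellation can be sketched computationally. This is a crude toy, not Levinson's model: the lists of scalar triggers and cancelling phrases below are invented for illustration, and real cancellation would depend on the full context, not on string matching.

```python
# Toy incremental processor -- an assumption-laden sketch, NOT
# Levinson's model. The trigger and canceller inventories are invented
# to illustrate how a local default fires and is later cancelled.

SCALAR_DEFAULTS = {"many": "many but not all", "some": "some but not all"}
CANCELLERS = {"many": "and possibly all", "some": "if not all"}

def interpret(utterance):
    """Assign a local default as soon as a scalar trigger is processed;
    cancel it if the later co-text contains a cancelling phrase."""
    text = utterance.lower()
    annotations = {}
    for word in text.replace(",", "").replace(".", "").split():
        if word in SCALAR_DEFAULTS:
            annotations[word] = SCALAR_DEFAULTS[word]  # default fires locally
    for word, phrase in CANCELLERS.items():
        if word in annotations and phrase in text:
            del annotations[word]                      # default cancelled
    return annotations
```

On this sketch, the sentence in (16a) receives the ‘not all’ annotation as soon as ‘many’ is processed, while the continuation in (16c) removes it; the open questions in the text (what counts as the triggering unit, how much context is consulted) correspond to the arbitrary choices hard-coded here.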

As far as feature [8] is concerned, experimental work helps decide between [8a] and [8b], measuring the recovery time for the default meaning as opposed to the non-default one. The development of the ability to use scalar inferences has also been tested (Noveck 2001, 2018; Papafragou & Musolino 2003; Musolino 2004; Noveck and Sperber 2004; Geurts 2010; see also contributions to Cummins and Katsos 2019). It has been argued on the basis of some evidence that default interpretations are not faster to produce and can be absent altogether from processing in the case of five-year-old subjects. Noveck (2004) provides the following evidence against Levinson’s automatic and fast defaults. Children were presented with some descriptions of situations in which the order of the events was inverted in narration. They had to assess whether the description was true or false. The outcome was that the children who agreed with the inverted description reacted faster than the ones who disagreed. It was then concluded that enriching ‘and’ to ‘and then’ is not automatic: it takes time. And, if pragmatically enriched responses take longer, then they cannot be the default ones (see Noveck 2004: 314). Similarly, with scalar terms, if one could demonstrate that the enriched readings, such as ‘some but not all’ for ‘some’, arise faster than ‘some but not necessarily not all’, one would have strong evidence in support of the defaults view.

The problem is that all these experiments assume Levinson’s notion of a fast and inference-free default, while this is, as we have seen, by no means the only understanding of default interpretations, and, arguably, not even the most commonly assumed one. The experimenters talk of arguments for and against ‘the Default View’ or ‘the Default Model’ (see also Bezuidenhout and Morris 2004; Breheny et al. 2006), while, in fact, there is no such unique model to be falsified: there are very different understandings of defaultness even in post-Gricean pragmatics alone, as is evident from Section 1. The list of possible defining characteristics of default interpretations in [1]–[8] shows that one cannot talk about the default meaning. At the same time, it is much harder to provide any experimental evidence for or against salient meanings that draw on some contextual information, arise late in utterance processing, and are not normally cancellable. The latter also seem much more intuitively plausible than Levinson’s rigid defaults in that they are simply shortcuts through costly pragmatic inference and as such can be triggered by the situation itself rather than by the properties of a lexical item or construction at large. They are just normal, unmarked meanings for the context at hand, and it is not improbable that such default, salient interpretations will prove to constitute just the polar end of a scale of degrees of inference rather than have qualitatively different properties from non-default, clearly inference-based interpretations. They will occupy the area towards the ‘zero’ end of the scale of inference but will not trigger the dichotomy ‘default vs. inferential interpretation’. But since it is debatable whether salience ought to be equated with defaultness in the first place, our terminological quandary comes back with full strength (see Section 3).

It is also difficult to pinpoint the boundary between default and non-default interpretations when we allow context and inference to play a role in default meanings, that is, when we allow [2b] and [8b]. This does not mean, however, that we should throw them out with the bathwater and resort to proposing nonce-inference in the case of every single utterance produced in discourse. When context-dependence of defaults is allowed, the main criterion for such meanings is their subdoxastic arrival. When conscious inference is allowed, the main criterion is that only minimal contextual input is permitted, such as, say, the co-text in (24). In (24), the definite description ‘the first daughter’ has the attributive rather than the referential reading.

(24) The first daughter to be born to Mr and Mrs Brown will be called Scarlett.

On a traditional Gricean view of post-propositional, sentence-based pragmatic inference, we have here the default attributive reading: the expression ‘to be born’ and the future auxiliary ‘will’ signal that no particular, extant, known individual is referred to. This is also the view followed in Default Semantics (Section 1.8), where both inference and defaults are ‘assumed to be global’ – ‘assumed’ in the sense of a methodological assumption put in place until we have the means to test the actual length of fluid characters and the content of the corresponding default bases. In other words, information arrived at through WS merges with that from CPI, CD and SCWD when all of the WS is ready. But in Default Semantics there is no default involved in (24): we have WS merging with CPI to produce the attributive reading. On Levinson’s presumptive meanings account (Section 1.3), it can be stipulated that (24) would fall in-between GCIs and PCIs: the only context that is required is the sentence itself, so the example is not different from any other cases of GCIs. But the locality of the GCI is the problem: depending on how we construe the length of the triggering expression, we obtain a GCI or a PCI. When we construe it as ‘the first daughter’, the sub-part of the definite noun phrase, then we obtain the referential reading as the default, to be cancelled by ‘to be born’. In short, we don’t know yet, at the current state of theorizing and experimenting, which of the potential defining characteristics of defaults to employ. Neither are we ready to propose the demarcation line between default and non-default interpretations. We can conceive of the former as shortcuts through inference, but such a definition will not suffice for delimiting a category.
We can, however, concede that default interpretations are governed by principles of rational behaviour in communication, be it Gricean maxims, neo-Gricean principles or heuristics, the logic of information structuring of SDRT, or a version of defeasible logic as presented above.

All in all, it appears that the diversity of default interpretations pertains not only to their features listed in [1]–[8] but also to their provenance. This diversified use makes the term heavily theory-dependent. Next, we can move to orthogonal, albeit no less important, discussions of defaults vs. salience and defaults vs. literal meaning.

3. Defaults, Salience, and Literalness

In pragmatic theory, the term ‘default’ is often used in association with the term ‘salience’, so it is important to clarify the similarities and differences between them. For Giora (e.g., 2003; Giora & Givoni 2015), salience and defaultness are two different concepts. Salience depends merely on ‘accessibility in memory due to such factors as frequency of use or experiential familiarity’ (Giora 2003: 33). For her, salience concerns meanings, while defaultness concerns interpretations:

Defining defaultness in terms of an automatic response to a stimulus predicts the superiority of default over non-default counterparts, irrespective of the degree of non-salience, figurativeness, context strength…. (Giora & Givoni 2015: 291)

So, salience is here a graded phenomenon (see here Giora 2003 for her experimentally supported Graded Salience Hypothesis). Salience is also independent from the literalness of the interpretation: the highly accessible meaning is not always the literal one. According to Giora (2003: 33), ‘literality is not a component of salience’. The latter is caused by experience, frequency of use, and as such is reflected in the accessibility in memory. Her experiments demonstrated that ‘familiar instances of metaphor and irony activated their salient (figurative and literal) meanings initially, regardless of contextual information’ (p. 140).

Reconciling defaultness with salience may, however, be problematic in particular cases, leading to the proposal of the degrees of defaultness (Giora & Givoni 2015), and to a rather counterintuitive diluting of the concept of a default. For example, sarcasm can rely on non-salient but default interpretation – non-salient when the interpretation is compositionally put together instead of being processed as a conventionalised unit.

On the other hand, for Jaszczolt (e.g., 2016a) defaultness relies precisely on salience that leads to automatic meaning retrieval. She calls this view Salience-Based Contextualism: the meaning of an utterance is derived through a variety of interacting processes, some of which rely on automatic interpretations such as cognitive defaults or socio-cultural and world knowledge defaults identified in the theory of Default Semantics and discussed in Section 1.8. Such defaults are defaults for the context and for the speaker, but salience that predicts them is not. According to Salience-Based Contextualism, words and structures can trigger salient, automatically retrieved meanings. This is guaranteed by the fact that language is a socio-cultural as well as a cognitive phenomenon and as such is shaped by its common use in discourse on the one hand, and by the structure and operations of the brain on the other (see Jaszczolt 2016a: 50). Salience is situation-free (although not always co-text free – see above on fluid characters), defaultness is not: it is easy to imagine a speaker who does not make use of the available salient interpretation because he or she lacks the necessary sociocultural background, knowledge of the laws of physics, or is guided by the context towards a different interpretation. Defaultness applies to a flexible unit on the basis of which an interpretation is formed (a fluid character). Defaults in conversation result from emergent intentionality (Haugh 2008, 2011): they rely on the process of co-construction of meaning discussed in Section 4. So, conversational defaults subsume salient meanings – literal or not, such as Giora’s salient meanings understood as conventional, prototypical, familiar, or frequent.

One of the important corollaries of this salience-based defaultness is the possible redundancy of the literal/nonliteral distinction. If words exhibit strong influences on each other in a string as well as influences from the situation of discourse (sometimes called ‘lateral’ and ‘top-down influences’, respectively – see e.g., Recanati 2012), then we have little reason for postulating literal meaning. For example, in (25), we have little reason for postulating ‘literal’ meanings of ‘city’ or ‘asleep’: either of the words may accommodate to fit the other, and the appropriate interpretation will follow.

(25) The city is asleep. (from Recanati 2012: 185)

If ‘city’ means ‘inhabitants of the city’, ‘asleep’ applies to it directly. If it means the place with its infrastructure, ‘asleep’ has to adjust to mean something to the effect of ‘quiet’, ‘dark’ or ‘showing no movement’. But neither of the processes has a clear point of departure: there is no obvious, clearly definable move from literal to non-literal. Such co-occurrence comes with degrees of salience of certain meanings. Such options can also be viewed as ‘probabilistic meanings’ that are ‘contextually affirmed’, to use Allan’s (2011: 185) terminology, through nonmonotonic inferences either in the co-text or by some other factor in the common ground.

However, probabilistic meanings presuppose ambiguity, while in general the salience- and default-oriented semantics and pragmatics relies on the assumption of underspecification. While in the case of (26)–(27) an ambiguity account appears justified when viewed from the perspective of the inventory of the lexicon in a language system (in that one would expect an analogy from ‘leopard’ and ‘fox’ to ‘lamb’ and ‘goat’, while in the context of the sentence this analogy is not present, as shown in (26a) and (27a)), most cases of lexical adjustment cannot be traced back to properties of entries in the lexical inventory.

(26) Jacqueline prefers leopard to fox.
(27) Harry prefers lamb to goat.
(26a) Jacqueline prefers leopard skin to fox fur.
(27a) Harry prefers eating lamb to eating goat.

(from Allan 2011: 180). Probabilistic meaning, according to Allan, ought to be included in the lexicon: a lexeme ought to be listed in the lexicon with different interpretations, annotated for their probability and circumstances in which this meaning is likely to occur. He calls such probabilistic meanings ‘grades of salience’.

It is evident that this proposal brings us to the territory of vector-based semantics in computational linguistics, while, on the other hand, accounts based on intention- and context-driven adjustment such as Recanati’s and Jaszczolt’s pull in the direction of post-Gricean pragmatics. But as Sections 1.1–1.8 demonstrate, the two traditions are not necessarily incompatible. Lexical salience and radical contextualism about the lexicon point in the same direction as distributional accounts in computational semantics: we have probabilities of certain meanings because these are meanings derived from the ‘company a word keeps’, to adapt Firth’s (1957: 11) famous dictum. All in all, content words are strongly context-dependent – to the extent that perhaps indexicality ought to be viewed not as a defining feature of some lexical items but as a gradable feature of the entire lexicon. But what counts as indexicality is a separate theoretical question that cannot be pursued here.
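Firth’s dictum can be given a toy computational rendering. The sketch below – which assumes an invented four-sentence corpus and is a minimal instance of the distributional approach mentioned above, not a model anyone has proposed in this exact form – represents each word as a vector of counts of its neighbours and compares words by the cosine of the angle between their vectors:

```python
# Toy distributional model: a word's meaning is approximated by the
# counts of the words co-occurring with it within a +/-2-word window.
from collections import Counter, defaultdict
from math import sqrt

corpus = ("the city is asleep . the city is quiet . "
          "the child is asleep . the child is quiet .").split()

WINDOW = 2
vectors = defaultdict(Counter)
for i, w in enumerate(corpus):
    for j in range(max(0, i - WINDOW), min(len(corpus), i + WINDOW + 1)):
        if i != j:
            vectors[w][corpus[j]] += 1

def cosine(u: Counter, v: Counter) -> float:
    """Cosine similarity of two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# 'city' and 'child' keep similar company in this toy corpus, so their
# vectors end up closer to each other than to that of 'asleep'.
print(cosine(vectors["city"], vectors["child"]))
print(cosine(vectors["city"], vectors["asleep"]))
```

The design choice is the one at stake in the surrounding discussion: nothing in the model distinguishes literal from non-literal senses – similarity of meaning simply falls out of similarity of distribution.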

Next, there is one more reason why salience has to be clearly distinguished from defaultness. Let us consider demonstratives. The object referred to by using ‘that’ can be located with the help of (i) the recognition of the speaker’s intention, or (ii) the act of pointing, or even (iii) the presence of a particular prominent object in the visual field of the interlocutors. All these combine to delimit the concept of salience: such an object has to be (made) salient for the linguistic demonstration to succeed. Here salient meaning is entirely, or almost entirely (allowing for the grammatical rendering of e.g., the proximal/distal distinction) determined by the given context and by the speaker’s knowledge that it is the relevant context (see Lewis 1979 on scorekeeping; Cappelen and Dever 2016 for a discussion). What is important for us here is that salience can be produced by the use of a context-dependent term: objects are brought to salience by the use of an indexical. Salience so understood is still compatible with the situation-independent concept of salience discussed above (to distinguish it from defaultness) in that it is the semantic meaning, the character (Kaplan 1989) of the demonstrative that triggers the bringing-to-salience process.

Now, on the one hand, linguistic research informs us that expressions with thin semantic content, such as anaphorically used demonstrative pronouns or personal pronouns, are employed for referents whose cognitive status is high. In other words, they are used when the object is in the focus of attention or at least activated in memory (Gundel et al. 1993; see Jaszczolt 2002: 140–149 for a discussion). On the other hand, when combined with an act of demonstration, the object can be made salient. Cappelen and Dever (2016) distinguish two types of successful referring by demonstratives, pointing and intending (i.e., (i) and (ii) above), contrasted with prominence (i.e., (iii) above). The former create salience, while the latter exploits extant salience; the former bring entities into focus, while the latter trades on their in-focus cognitive status. What is of particular interest to semanticists (of both orientations discussed here, Gricean and computational) is that the first type allows for accommodation (Lewis 1979): objects are made more salient when communication requires it.

To conclude, salience clearly differs from defaultness, but for expressions (words, phrases, sentences) for which delimiting default interpretations makes sense, salience can provide an explanans.

4. Co-construction of Meaning: Towards Dynamic, Interactional Defaults

It is not a new observation that interlocutors can have collective, or joint, intentions in communication. Joint construction of meaning has been attended to in approaches as different as game theory (Lewis 1979; Parikh 2010), action theory (Searle 1990) and conversation analysis (Sacks, Schegloff and Jefferson 1974). But it is only more recently that the joint construction (or co-construction) of meaning has begun to come to the forefront of pragmatics tout court. For example, Arundale’s so-called Conjoint Co-constituting Model of Communication focuses on the interactive construction of implicatures (Arundale 1999, 2010; Haugh 2007, 2008; Haugh and Jaszczolt 2012). Next, Elder and Haugh (2018) propose a model of complex inferential work, continuing for several conversational turns, that can account for mismatches between the expected and the actual inferences, as well as for the fact that speakers may not have precise intentions while issuing an utterance and be open to negotiating meaning. These aspects of communication had often been neglected, or even denied, in post-Gricean research.

It is no surprise that understanding communication as emergent meanings, as negotiating intentions and commitments (see e.g., Geurts 2019; Elder 2021), is taking the concept of default into the new territory of dynamic, co-constructed meanings. Elder and Haugh, for example, aim to model the joint construction of the meaning that is settled on by the interlocutors as the main content – akin, as they say, to the ‘primary meaning’ of Default Semantics (Jaszczolt 2005) or to Ariel’s (2002) ‘privileged interactional interpretation’ – free from any formal constraints that the logical form of the uttered sentence might impose, but instead, and in addition, flowing freely, as conversation progresses, with dynamic, changing intentions. Such co-constructed meaning is steered by default interpretations that are assigned privately by each party but also sometimes rejected in favour of what progressively emerges as a default inference – an interactively constructed meaning that is sensitive to what is fit to be taken for granted, and as such sensitive to what other parties would, and do, take for granted. In short, dynamic, interactional pragmatics comes with dynamic, interactional defaults.

5. Defaults and Accountability

The interactional pragmatics discussed in the previous section highlights another important aspect of communication that was largely neglected in early pragmatic theories, namely speaker accountability. Haugh (2013: 53) presents it as follows:

First, what a speaker is held accountable for goes beyond the veracity of information to include other moral concerns, such as social rights, obligations, responsibilities and the like. Second, to be held interactionally accountable differs from inferring commitment. The former is tied to an understanding of speaker meaning as arising through incremental, sequentially grounded discourse processing …, while the latter is tied to a punctuated view of speaker meaning that arises at the level of utterance processing.

As he says, it is important to view speaker meaning as a deontological concept – that is, to view it from the perspective of moral philosophy and of norms specifying which actions are permitted in conversation and which are not. Viewing speaker meaning as deontological puts the emphasis on practical applications of speech, on such moral norms, and on the consequences of conversational behaviour. And this is where defaultness comes to the fore, propelled by the question as to what kind of content speakers ought to be held accountable for. Should it be the kind of meaning that speakers feel committed to, or the kind of meaning that they, sometimes inadvertently, communicated? Or, perhaps, merely the minimal semantic content? Emma Borg (2019), for example, distinguishes between what she calls (i) strict liability, that is, liability for the minimal proposition pertaining to one’s utterance, delimited according to the principles of her Minimal Semantics (Borg 2004, 2012), and (ii) conversational liability (a gradable concept), applied to potential interpretations of an utterance in the context of conversation. But such philosophical solutions do not go far enough in that they merely offer a more fine-grained classification rather than implementable answers. Next, Elder and Haugh (2018) focus on ‘the most salient propositional meaning that is ostensively made operative between interlocutors’ (p. 595), which allows them to offer a proposal with more direct utility for discussions of moral responsibility. The concept of dynamic, operative meaning allows them to argue that it is the addressee’s response that makes the speaker accountable for the emergent meaning – the meaning that the interlocutors settle on in the next turn, when the original speaker becomes aware of their ‘reflexive accountability’. But, again, feasibility of implementation has to wait for a model with some predictive power.

All in all, the concepts of commitment, liability (including legal liability) and accountability are beginning to move to the forefront of philosophy of language, pragmatics, and contextualist semantics (and as such, necessarily, also metasemantics and metapragmatics, see Jaszczolt 2022), and they take the concept of defaultness with them into a new dynamic, interactive dimension. These debates have ample offshoots. For example, they open up new perspectives on such questions as what counts as an assertion, insult, slur, or a lie: what kind of content ought we to include, and, on the other hand, what norms ought we to adopt? Here are a few snapshots.

Sanford Goldberg asks what constitutes the speech act of assertion and what epistemic norm assertion should satisfy. He says that assertions are “speech acts in which a given proposition is presented as true, where this presentation has certain force (what we might call assertoric force)” (Goldberg 2015: 5) and that “[i]t is mutually manifest to participants in a speech exchange that assertion has a robustly epistemic norm; that is, that one must: assert that p, only if E(one, [p])” (p. 96), where ‘E’ stands for a description of a mutually manifest epistemic standard. But this opens up the question as to what counts as asserted content, for linguistic as well as social, legal, and ethical purposes. Is it merely minimal content? But perhaps there is no such thing as minimal content, in that there are no core meanings, no context-free concepts (cf. Rayo 2013)? Or, perhaps, one ought to opt for Levinson-style presumptive meanings (Section 1.3), or context-driven defaults (Sections 1.7–1.8), or even interactive meanings that emerge thanks to salience and defaultness of certain interpretations (Section 4)? Scope for theorizing and experimentation is still ample.

Relatedly, default meaning is an important concept for the discussion of deception, such as lying and misleading. Insincerity has recently engendered heated debates in linguistic pragmatics, in that it is a moot point what exactly counts as lying. For example, is lying saying something that the speaker believes to be false? Or is it saying something with an intention to deceive, the content of which need not necessarily be false (see e.g., Stokke 2018)? And what exactly counts as ‘saying’ for the purpose of lying (see e.g., Saul 2012)? Further, can one lie through implicatures and presuppositions (Meibauer 2014)? And perhaps we ought to venture even further. As Heffer (2020: 6) says, “…we need to extend the scope of untruthfulness both from utterance insincerity (lying and misleading) to discursive insincerity (withholding), and from intentional insincerity to epistemic irresponsibility” (see also Carson 2010). Epistemic sincerity is the default in ordinary, ‘non-strategic’ contexts, so lying, misleading, and bullshitting (that is, speaking without regard for truth or falsehood) can easily go through. But here the question as to what exact meaning the speaker is accountable for is intimately related to the question of conversational salience and defaults.

Now, it appears that for the debates at the crossroads of theory of meaning and ethics, the crucial understanding of defaultness is that of salient emergent meaning in conversation. As we have seen here, such defaults can be abused – a topic that is of interest to pragmatics as well as to theories of argumentation. Let us consider one more snapshot, this time an example of the argumentative strategy of decontextualisation, where instead of inferring the default interpretation (in the sense of the automatic, most salient interpretation for the context), the addressee reacts to the decontextualised semantic content of the speaker’s utterance. This is traditionally called ‘the fallacy of ignoring qualifications’ (secundum quid or secundum quid et simpliciter, loosely translatable as ‘what is true in a way and what is true absolutely’). It is a form of deceit that, as Macagno (2022) argues, is best explicated in a broader pragmatic context of ignoring not only what is explicit but also what is implicit:

According to this view, the secundum quid does not result from the ‘suppression’ or ‘ignoring’ of explicit qualifications or evidence (…); rather, it is committed when an implicit qualification is reconstructed in a way that was not intended and could not be presumed. (Macagno 2022: 6)

As he says, post-Gricean pragmatic theories are indispensable here for explaining how this form of manipulation works, in that they can invoke principles and heuristics of how the intended meaning is recovered by the addressee. He focuses on contextualist accounts that capture the proposition expressed, or Recanati’s what is said and Sperber and Wilson’s explicature. Secundum quid is explained as a strategy of manipulation that exploits the pragmatic process of enrichment, relying not on assumptions shared by the interlocutors but on unwarranted, strategically chosen substitutes:

The qualified or absolute interpretations are fallacious because they are different from the ‘default’, ‘salient’, or more generally ‘plain’ meaning. (Macagno 2022: 7)

The importance of such defaults for argumentation theory requires no further defence.

6. Concluding Remarks and Future Prospects

This comparison of various selected approaches to default interpretations in semantics and pragmatics allows for some generalizations. First, it is evident from the surveyed literature that, contrary to the assumptions of some experimental pragmaticists, there is no one, unique ‘default model’ of utterance interpretation. Instead, default (and salient) meanings are recognised in many approaches to utterance interpretation, but they are defined by different sets of characteristic features. Next, in the present state of theorizing, data-based analysis, and experimentation, while the rationale for default interpretations is strong, some of the properties of such interpretations are still in need of further investigation. For example, the discussions of the locality of defaults and of their subdoxastic arrival are in need of empirical support before they can be taken any further. In other words, this ‘fluid character’ is in need of empirical identification – and even more so when we add the dynamic perspective of interactively achieved, co-constructed meanings.

Moreover, the principle and method for delimiting a default interpretation, as distinguished from an inferential interpretation, is still a task for the future. The existence of shortcuts that bypass costly inference is an appealing and hardly controversial thesis, but the exact properties of such meanings are still subject to dispute.

Next, the automatic arrival at context-dependent meanings has to be discussed as part of the debate between the direct access view and the modular view of language processing. Direct access predicts that context is responsible for activating relevant senses, to the extent that the salience of the particular sense of a lexical item does not play a part. According to the modular view, lexical meanings that are not appropriate for the context are also activated, only to be suppressed at a further stage of processing. With the rise of theories that sit between these polar views, the question of the compatibility of salience in the lexicon with the default status of utterance interpretations requires more attention. What can be attributed to the lexicon and what to the context of utterance remains an unresolved question.
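The contrast between the two views can be made vivid with a toy simulation. The following sketch is purely illustrative – the lexeme, senses, and contextual cues are invented, and no claim is made about actual psycholinguistic mechanisms; the point is only that the two models can converge on the same final interpretation while differing in the intermediate set of activated senses:

```python
# Hypothetical toy contrast between the direct access view and the
# modular view of lexical processing (illustration only).
SENSES = {
    "bank": {
        "financial institution": {"money", "loan", "deposit"},
        "river edge": {"water", "fish", "shore"},
    },
}

def direct_access(word, context):
    """Context gates activation: only contextually relevant senses are
    ever accessed, so intermediate and final sets coincide."""
    surviving = [g for g, cues in SENSES[word].items() if cues & context]
    return surviving, surviving

def modular(word, context):
    """All senses are activated first (stage 1), then contextually
    inappropriate ones are suppressed (stage 2)."""
    activated = list(SENSES[word])
    surviving = [g for g in activated if SENSES[word][g] & context]
    return activated, surviving

ctx = {"money"}
print(direct_access("bank", ctx))  # one sense activated, one survives
print(modular("bank", ctx))        # both senses activated, one survives
```

Since behavioural outputs coincide while processing histories differ, adjudicating between the views requires measures of intermediate activation rather than of final interpretations – which is why the debate is an empirical one about processing.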

Finally, whether we approach defaults through distributional computational semantics, theoretical truth-conditional semantics, post-Gricean pragmatics, or some version of a combined view, progress in research on defaults in human reasoning will necessarily require progress in technology. No matter how powerful our theories are, they will have to be tested either on large corpora or through neuroimaging: more traditional methods of psycholinguistic experiments or small databases will always leave a wide margin of doubt as to whether this is really how human reasoning works. If there are big generalizations to be made regarding how we jump to conclusions, these will have to be modern equivalents of the 19th-century phenomenological ideas: founded on the intentionality of mental states and the informativeness of acts of communication, all predicted by assumed efficiency (but not necessarily cooperation), but with access to modern methods of empirical corroboration (or falsification).

In this context, can we safely assume that the concept of default meaning will sail with the most successful approaches to human communication, or is it in need of precisification now, when it is so diversely used (and sometimes misused)? Conceptual engineering is often a risky business. But going along with the common-sense use is not: meanings that ‘spring to mind’ are default meanings – it is just that for some purposes we want to talk about meanings and associations that spring to mind in the context of a particular discourse, and for others, in the language system alone. This degree of flexibility, further justified by the diverse objectives in employing the concept – as diverse as, say, making an interlocutor accountable for a strongly conveyed but ‘unsaid’ insult on the one hand, and devising algorithms for training machines to use language to communicate on the other – is probably not so difficult for theorists of meaning to live with.


  • Allan, K., 2011, “Graded Salience: Probabilistic Meaning in the Lexicon”, in Salience and Defaults in Utterance Processing, K. M. Jaszczolt & K. Allan (eds.), Berlin: De Gruyter Mouton, 165–187.
  • Ariel, M., 2002, “Privileged Interactional Interpretations”, Journal of Pragmatics, 34: 1003–1044.
  • –––, 2016, “Revisiting the Typology of Pragmatic Interpretations”, Intercultural Pragmatics, 13: 1–35.
  • Arundale, R. B., 1999, “An Alternative Model and Ideology of Communication for an Alternative to Politeness Theory”, Pragmatics, 9: 119–153.
  • –––, 2010, “Constituting Face in Conversation: Face, Facework, and Interactional Achievement”, Journal of Pragmatics, 42: 2078–2105.
  • –––, 2013, “Conceptualizing ‘Interaction’ in Interpersonal Pragmatics: Implications for Understanding and Research”, Journal of Pragmatics, 58: 12–26.
  • Asher, N. & A. Lascarides, 1995, “Lexical Disambiguation in a Discourse Context”, Journal of Semantics, 12: 69–108.
  • –––, 2003, Logics of Conversation, Cambridge: Cambridge University Press.
  • Bach, K., 1984, “Default Reasoning: Jumping to Conclusions and Knowing When to Think Twice”, Pacific Philosophical Quarterly, 65: 37–58.
  • –––, 1987, Thought and Reference, Oxford: Clarendon Press.
  • –––, 1994, “Semantic Slack: What Is Said and More”, in Foundations of Speech Act Theory: Philosophical and Linguistic Perspectives, S. L. Tsohatzidis (ed.), London: Routledge, 267–291.
  • –––, 1995, “Remark and Reply. Standardization vs. Conventionalization”, Linguistics and Philosophy, 18: 677–686.
  • –––, 1998, “Postscript (1995): Standardization Revisited”, in Pragmatics: Critical Concepts (Volume 4), A. Kasher (ed.), London: Routledge, 712–722.
  • –––, 2007, “Regressions in Pragmatics (and Semantics)”, in Pragmatics, N. Burton-Roberts (ed.), Basingstoke: Palgrave Macmillan, 24–44.
  • Bezuidenhout, A. L. & R. K. Morris, 2004, “Implicature, Relevance and Default Pragmatic Inference”, in Experimental Pragmatics, I. A. Noveck & D. Sperber (eds.), Basingstoke: Palgrave Macmillan, 257–282.
  • Blutner, R., 2000, “Some Aspects of Optimality in Natural Language Interpretation”, Journal of Semantics, 17: 189–216.
  • Blutner, R., & H. Zeevat, 2004, “Editors’ Introduction: Pragmatics in Optimality Theory”, in Optimality Theory and Pragmatics, R. Blutner & H. Zeevat (eds.), Basingstoke: Palgrave Macmillan, 1–24.
  • Boguraev, B. & J. Pustejovsky, 1990, “Lexical Ambiguity and the Role of Knowledge Representation in Lexicon Design”, Proceedings of the 13th International Conference on Computational Linguistics COLING ’90, Helsinki, 36–41.
  • Borg, E., 2004, Minimal Semantics, Oxford: Clarendon Press.
  • –––, 2012, Pursuing Meaning, Oxford: Oxford University Press.
  • –––, 2019, “Explanatory Roles for Minimal Content”, Noûs, 53: 513–539.
  • Breheny, R., N. Katsos & J. Williams, 2006, “Are Generalised Scalar Implicatures Generated by Default? An On-line Investigation into the Role of Context in Generating Pragmatic Inferences”, Cognition, 100: 434–463.
  • Bultinck, B., 2005, Numerous Meanings: The Meaning of English Cardinals and the Legacy of Paul Grice, Amsterdam: Elsevier.
  • Cappelen, H. & J. Dever, 2016, Context and Communication, Oxford: Oxford University Press.
  • Carson, T., 2010, Lying and Deception: Theory and Practice, Oxford: Oxford University Press.
  • Carston, R., 1988, “Implicature, Explicature, and Truth-Theoretic Semantics”, in Mental Representations: The Interface Between Language and Reality, R. M. Kempson (ed.), Cambridge: Cambridge University Press, 155–181.
  • –––, 2002, Thoughts and Utterances: The Pragmatics of Explicit Communication, Oxford: Blackwell.
  • Chemla, E., V. Homer & D. Rothschild, 2011, “Modularity and Intuitions in Formal Semantics: The Case of Polarity Items”, Linguistics and Philosophy, 34: 537–570.
  • Chierchia, G., 2004, “Scalar Implicatures, Polarity Phenomena, and the Syntax/Pragmatics Interface”, in Structures and Beyond: The Cartography of Syntactic Structures (Volume 3), A. Belletti (ed.), Oxford: Oxford University Press, 39–103.
  • Coecke, B., M. Sadrzadeh & S. Clarke, 2010, “Mathematical Foundations for a Compositional Distributional Model of Meaning”, Linguistic Analysis, 36: 345–384.
  • Cummins, C. & N. Katsos (eds.), 2019, The Oxford Handbook of Experimental Semantics and Pragmatics, Oxford: Oxford University Press.
  • Elder, C.-H., 2021, “Speaker Meaning, Commitment and Accountability” in The Cambridge Handbook of Sociopragmatics, M. Haugh, D. Z. Kádár & M. Terkourafi (eds.), Cambridge: Cambridge University Press, 48–68.
  • Elder, C.-H. & M. Haugh, 2018, “The Interactional Achievement of Speaker Meaning: Toward a Formal Account of Conversational Inference”, Intercultural Pragmatics, 15: 593–625.
  • Elder, C.-H. & K. M. Jaszczolt, 2016, “Towards a Pragmatic Category of Conditionals”, Journal of Pragmatics, 98: 36–53.
  • Firth, J. R., 1957, Papers in Linguistics 1934–53, Oxford: Oxford University Press.
  • Frege, G., 1893, Grundgesetze der Arithmetik (Volume 1), references to the reprint in The Frege Reader, M. Beaney (ed.), 1997, Oxford: Blackwell, 84–191.
  • Gazdar, G., E. Klein, G. Pullum, & I. Sag, 1985, Generalized Phrase Structure Grammar, Oxford: B. Blackwell.
  • Geurts, B., 2007, “Really Fucking Brilliant”, Theoretical Linguistics, 33: 209–214.
  • –––, 2009, “Scalar Implicature and Local Pragmatics”, Mind and Language, 24: 51–79.
  • –––, 2010, Quantity Implicatures, Cambridge: Cambridge University Press.
  • –––, 2019, “Communication as Commitment Sharing: Speech Acts, Implicatures, Common Ground”, Theoretical Linguistics, 45: 1–30.
  • Giora, R., 2003, On Our Mind: Salience, Context, and Figurative Language, Oxford: Oxford University Press.
  • ––– & S. Givoni, 2015, “Defaultness Reigns: The Case of Sarcasm”, Metaphor and Symbol, 30: 290–313.
  • Goldberg, S. C., 2011, “Putting the Norm of Assertion to Work: The Case of Testimony”, in Assertion: New Philosophical Essays, J. Brown & H. Cappelen (eds.), Oxford: Oxford University Press, 175–195.
  • –––, 2015, Assertion: On the Philosophical Significance of Assertoric Speech, Oxford: Oxford University Press.
  • Grice, H. P., 1975, “Logic and Conversation”, in Syntax and Semantics (Volume 3), P. Cole & J. L. Morgan (eds.), New York: Academic Press; references to the reprint in H. P. Grice, 1989, Studies in the Way of Words, Cambridge, Mass.: Harvard University Press, 22–40.
  • Gundel, J. K., N. Hedberg & R. Zacharski, 1993, “Cognitive Status and the Form of Referring Expressions in Discourse”, Language, 69: 274–307.
  • Harris, Z., 1954, “Distributional Structure”, Word, 10: 146–162.
  • Haugh, M., 2007, “The Co-constitution of Politeness Implicature in Conversation”, Journal of Pragmatics, 39: 84–110.
  • –––, 2008, “The Place of Intention in the Interactional Achievement of Implicature”, in Intention, Common Ground and the Egocentric Speaker-Hearer, I. Kecskes & J. Mey (eds.), Berlin: Mouton de Gruyter, 45–85.
  • –––, 2011, “Practices and Defaults in Interpreting Disjunction”, in Salience and Defaults in Utterance Processing, K. M. Jaszczolt & K. Allan (eds.), Berlin: De Gruyter Mouton, 189–225.
  • –––, 2013, “Speaker Meaning and Accountability in Interaction”, Journal of Pragmatics, 48: 41–56.
  • Haugh, M., & K. M. Jaszczolt, 2012, “Speaker Intentions and Intentionality” in The Cambridge Handbook of Pragmatics, K. Allan & K. M. Jaszczolt (eds.), Cambridge: Cambridge University Press, 87–112.
  • Heffer, C., 2020, All Bullshit and Lies? Insincerity, Irresponsibility, and the Judgment of Untruthfulness, Oxford: Oxford University Press.
  • Horn, L. R., 1984, “Toward a New Taxonomy for Pragmatic Inference: Q-based and R-based Implicature”, in Georgetown University Round Table on Languages and Linguistics 1984, D. Schiffrin (ed.), Washington, D.C.: Georgetown University Press, 11–42.
  • –––, 1988, “Pragmatic Theory”, in Linguistics: The Cambridge Survey, Vol. 1. Linguistic Theory: Foundations, F. J. Newmeyer (ed.), Cambridge: Cambridge University Press, 113–145.
  • –––, 1992, “The Said and the Unsaid”, Ohio State University Working Papers in Linguistics, 40 (SALT II Proceedings): 163–192.
  • –––, 2004, “Implicature”, in The Handbook of Pragmatics, L. R. Horn & G. Ward (eds.), Oxford: Blackwell, 3–28.
  • –––, 2006, “The Border Wars: A Neo-Gricean Perspective”, in Where Semantics Meets Pragmatics, K. von Heusinger and K. Turner (eds.), Oxford: Elsevier, 21–48.
  • –––, 2012, “Implying and Inferring”, in The Cambridge Handbook of Pragmatics, K. Allan & K. M. Jaszczolt (eds.), Cambridge: Cambridge University Press, 69–86.
  • Jaszczolt, K. M., 1999, Discourse, Beliefs, and Intentions: Semantic Defaults and Propositional Attitude Ascription, Oxford: Elsevier Science.
  • –––, 2002, Semantics and Pragmatics: Meaning in Language and Discourse, London: Longman.
  • –––, 2005, Default Semantics: Foundations of a Compositional Theory of Acts of Communication, Oxford: Oxford University Press.
  • –––, 2008, “Psychological Explanations in Gricean Pragmatics and Frege’s Legacy”, in Intention, Common Ground and the Egocentric Speaker-Hearer, I. Kecskes & J. Mey, (eds.), Berlin: Mouton de Gruyter, 9–44.
  • –––, 2009a, “Cancellability and the Primary/Secondary Meaning Distinction”, Intercultural Pragmatics, 6: 259–289.
  • –––, 2009b, Representing Time: An Essay on Temporality as Modality. Oxford: Oxford University Press.
  • –––, 2010, “Default Semantics”, in The Oxford Handbook of Linguistic Analysis, B. Heine & H. Narrog (eds.), Oxford: Oxford University Press, 193–221.
  • –––, 2012, “‘Pragmaticising’ Kaplan: Flexible inferential bases and fluid characters”, Australian Journal of Linguistics, 32: 209–237.
  • –––, 2016a, Meaning in Linguistic Interaction: Semantics, Metasemantics, Philosophy of Language, Oxford: Oxford University Press.
  • –––, 2016b, “On Unimaginative Imagination and Conventional Conventions: Response to Lepore and Stone”, Polish Journal of Philosophy, 10: 89–98.
  • –––, 2021, “Default Semantics”, in Oxford Bibliographies in Linguistics, M. Aronoff (ed.), New York: Oxford University Press (online).
  • –––, 2022, “Metasemantics and Metapragmatics: Philosophical Foundations of Meaning”, in The Cambridge Handbook of the Philosophy of Language, P. Stalmaszczyk (ed.), Cambridge: Cambridge University Press, 139–156.
  • Kamp, H. & U. Reyle, 1993, From Discourse to Logic: Introduction to Modeltheoretic Semantics of Natural Language, Formal Logic and Discourse Representation Theory, Dordrecht: Kluwer.
  • Kaplan, D., 1989, “Demonstratives: An Essay on the Semantics, Logic, Metaphysics, and Epistemology of Demonstratives and Other Indexicals”, in Themes from Kaplan, J. Almog, J. Perry, & H. Wettstein (eds.), New York: Oxford University Press, 481–563.
  • Koenig, J.-P., 1993, “Scalar Predicates and Negation: Punctual Semantics and Interval Interpretations”, Chicago Linguistic Society, 27(2): The Parasession on Negation, 140–155.
  • Landman, F., 2000, Events and Plurality, Dordrecht: Kluwer.
  • Lascarides, A., T. Briscoe, N. Asher & A. Copestake, 1996, “Order Independent and Persistent Type Default Unification”, Linguistics & Philosophy, 19: 1–89.
  • Lascarides, A., & A. Copestake, 1998, “Pragmatics and Word Meaning”, Journal of Linguistics, 34: 387–414.
  • Lepore, E. & M. Stone, 2016, Imagination and Convention: Distinguishing Grammar and Inference in Language, Oxford: Oxford University Press.
  • Levinson, S. C., 1995, “Three Levels of Meaning”, in Grammar and Meaning. Essays in Honour of Sir John Lyons, F. R. Palmer (ed.), Cambridge: Cambridge University Press, 90–115.
  • –––, 2000, Presumptive Meanings: The Theory of Generalized Conversational Implicature, Cambridge, Mass.: MIT Press.
  • Lewis, D., 1979, “Scorekeeping in a Language Game”, Journal of Philosophical Logic, 8: 339–359.
  • Liang, P. & C. Potts, 2015, “Bringing Machine Learning and Compositional Semantics together”, The Annual Review of Linguistics, 1: 355–376.
  • Macagno, F., 2022, “Ignoring Qualifications as a Pragmatic Fallacy: Enrichments and Their Use for Manipulating Commitments”, Languages, 7(13). doi:10.3390/languages7010013
  • Meibauer, J., 2014, Lying at the Semantics-Pragmatics Interface, Berlin: Mouton de Gruyter.
  • Musolino, J., 2004, “The Semantics and Acquisition of Number Words: Integrating Linguistic and Developmental Perspectives”, Cognition, 93: 1–41.
  • Noveck, I. A., 2001, “When Children are More Logical than Adults: Experimental Investigations of Scalar Implicature”, Cognition, 78: 165–188.
  • –––, 2004, “Pragmatic Inferences Related to Logical Terms”, in Experimental Pragmatics, I. A. Noveck & D. Sperber (eds.), Basingstoke: Palgrave Macmillan, 301–321.
  • –––, 2018, Experimental Pragmatics, Cambridge: Cambridge University Press.
  • Noveck, I. A., & D. Sperber (eds.), 2004, Experimental Pragmatics, Basingstoke: Palgrave Macmillan.
  • Papafragou, A. & J. Musolino, 2003, “Scalar Implicatures: Experiments at the Semantics-Pragmatics Interface”, Cognition, 86: 253–282.
  • Parikh, P., 2010, Language and Equilibrium, Cambridge, MA: MIT Press.
  • Pelletier, F. J. & R. Elio, 2005, “The Case of Psychologism in Default and Inheritance Reasoning”, Synthese, 146: 7–35.
  • Potts, C., 2005, The Logic of Conventional Implicatures, Oxford: Oxford University Press.
  • –––, 2007, “The Expressive Dimension”, Theoretical Linguistics, 33: 165–198.
  • –––, 2012, “Conventional Implicature and Expressive Content”, in Semantics: An International Handbook of Natural Language Meaning, vol. 2, C. Maienborn, K. von Heusinger and P. Portner (eds.), Berlin: De Gruyter Mouton, 2516–2535.
  • –––, 2015, “Presupposition and Implicature”, in The Handbook of Contemporary Semantic Theory, S. Lappin & C. Fox (eds.), Oxford: Wiley-Blackwell, 168–202.
  • Rayo, A., 2013, “A Plea for Semantic Localism”, Noûs, 47: 647–679.
  • Recanati, F., 2002, “Does Linguistic Communication Rest on Inference?”, Mind and Language, 17: 105–126.
  • –––, 2004, Literal Meaning, Cambridge: Cambridge University Press.
  • –––, 2010, Truth-Conditional Pragmatics, Oxford: Clarendon Press.
  • –––, 2012, “Compositionality, Flexibility, and Context Dependence”, in The Oxford Handbook of Compositionality, M. Werning, W. Hinzen & E. Machery (eds.), Oxford: Oxford University Press, 174–191.
  • Reiter, R., 1980, “A Logic for Default Reasoning”, Artificial Intelligence, 13: 81–132.
  • Richard, M., 2008, When Truth Gives Out, Oxford: Oxford University Press.
  • Roberts, C., 2004, “Context in Dynamic Interpretation”, in The Handbook of Pragmatics, L. Horn & G. Ward (eds.), Oxford: Blackwell, 197–220.
  • Sacks, H., E. A. Schegloff, & G. Jefferson, 1974, “A Simplest Systematics for the Organization of Turn-Taking for Conversation”, Language, 50: 696–735.
  • van der Sandt, R. A., 1992, “Presupposition Projection as Anaphora Resolution”, Journal of Semantics, 9: 333–377.
  • –––, 2012, “Presupposition and Accommodation in Discourse”, in The Cambridge Handbook of Pragmatics, K. Allan & K. M. Jaszczolt (eds.), Cambridge: Cambridge University Press, 329–350.
  • Saul, J. M., 2002, “What Is Said and Psychological Reality; Grice’s Project and Relevance Theorists’ Criticisms”, Linguistics and Philosophy, 25: 347–372.
  • –––, 2012, Lying, Misleading, and What Is Said: An Exploration in Philosophy of Language and Ethics, Oxford: Oxford University Press.
  • Sperber, D. & D. Wilson, 1986, Relevance: Communication and Cognition, Oxford: Blackwell; second edition, 1995.
  • Stokke, A., 2018, Lying and Insincerity, Oxford: Oxford University Press.
  • Stone, M., 2016, “Semantics and Computation”, in The Cambridge Handbook of Formal Semantics, M. Aloni & P. Dekker (eds.), Cambridge: Cambridge University Press, 775–800.
  • Thomason, R. H., 1997, “Nonmonotonicity in Linguistics”, in Handbook of Logic and Language, J. van Benthem & A. ter Meulen (eds.), Oxford: Elsevier Science, 777–831.
  • Tonhauser, J., D. Beaver, C. Roberts, & M. Simons, 2013, “Towards a Taxonomy of Projective Content”, Language, 89(1): 66–109.
  • Veltman, F., 1996, “Defaults in Update Semantics”, Journal of Philosophical Logic, 25: 221–261.
  • Zeevat, H., 2000, “Demonstratives in Discourse”, Journal of Semantics, 16: 279–313.
  • –––, 2004, “Particles: Presupposition Triggers, Context Markers or Speech Act Markers”, in Optimality Theory and Pragmatics, R. Blutner & H. Zeevat (eds.), Basingstoke: Palgrave Macmillan, 91–111.

Other Internet Resources


This entry draws on some sections of my ‘Default Interpretations’, published in Handbook of Pragmatics Online, vol. 10, 2006, ed. by J.-O. Ostman and J. Verschueren. I owe thanks to John Benjamins Publishing Co., Amsterdam, for permission to use the material.

Copyright © 2022 by
Katarzyna M. Jaszczolt
