# Substructural Logics

*First published Tue Jul 4, 2000; substantive revision Thu Aug 15, 2024*

*Substructural logics* are non-classical logics notable for the
absence of one or more *structural rules* present in classical
logic. Initial interest in substructural logics developed
independently in the second half of the twentieth century, through
considerations from philosophy (relevant logics), from linguistics
(the Lambek calculus) and from the mathematics of proof theory (linear
logic). Since the 1990s, these independent lines of inquiry have been
understood to be different aspects of a unified field, and techniques
from substructural logics are useful in the study of traditional
logics such as classical and intuitionistic logic. This article
provides a brief overview of the field of substructural logic. For a
more detailed introduction, complete with theorems, proofs and
examples, the reader can consult the books and articles in the
Bibliography.

- 1. Structural Rules
- 2. Different Logics
- 3. Proof Theory
- 4. Model Theory
- 5. Quantifiers
- 6. Logics without Cut or Identity
- Bibliography
- Academic Tools
- Other Internet Resources
- Related Entries

## 1. Structural Rules

Substructural logics are a family of logical systems characterised in
terms of their *structural rules*. A structural rule is a
logical rule that applies indiscriminately to all propositions without
regard to their form. The familiar inference rule of *modus
ponens* (from \(p\to q\) and \(p\) to infer \(q\)) is not a
structural rule because the first premise of the rule must be a
conditional. This is a rule not about the inferential behaviour of all
propositions, but a rule about conditionals. The syllogism
*Barbara* (from *all Fs are G* and *all Gs are H*
to infer *all Fs are H*) is similarly not a structural rule: it
is a rule specifically applying to universal claims. *Modus
ponens*, *Barbara* and other familiar rules such as the
*Law of Non-Contradiction* are *operational rules*
governing or describing the logical features of particular concepts,
such as the conditional, universal generalisation, negation, etc. Not
all logical rules are operational rules. Some rules govern the ambient
behaviour of logical consequence *as such*, and these are the
structural rules.

The *Cut* rule can be stated like this: if from some premises
(call them \(X\)) we can prove the conclusion \(A\), and if from \(A\)
together with more premises (call them \(Y\)) we can prove \(B\) then
we can prove the conclusion \(B\) from the premises \(X\) and \(Y\)
taken together.
\[ \cfrac{X\vdash A\qquad Y,A\vdash B}{X,Y\vdash B} \]
Here, there is no restriction on the form
of any of the particular judgements featuring in the rule. The
particular premises and conclusions are completely general, so
*Cut* is a structural rule.

In this example, the structural rule is stated in terms of what one
can *prove*, and one way to motivate structural rules like this
is by way of considerations of how proofs may be formed and combined.
After all, the proof from \(X\) together with \(Y\) to the conclusion
\(B\) may be constructed by first proving \(A\) from \(X\) and then
using this \(A\) together with \(Y\) to prove \(B\).

Structural rules can also be motivated by characterising logical
validity by way of *models*. If an argument from some premises
to a conclusion is understood to be *valid* when there is no
model of the premises that is not also a model of the conclusion (such
a model would be a *counterexample* to the argument), then the
*Cut* rule may likewise be motivated: if every model of every
member of \(X\) is a model of \(A\), and every model of every member
\(Y\) and of \(A\) is also a model of \(B\), then every model of each
member of \(X\) and each member of \(Y\) is (by virtue of being a
model of every member of \(X\)) a model of \(A\), and so (since it is
also a model of every member of \(Y\)), it is a model of \(B\). In
this reasoning, we did not have to appeal to the truth conditions for
any formulas in models: the rule is not beholden to any particular
scheme concerning how formulas are evaluated. Provided that validity
is defined in terms of preservation of “holding in a
model” (however that is to be understood), the logical
consequence relation satisfies the *Cut* rule.

So, regardless of whether we think of validity in terms of proofs or in terms of models, the structural rules are facts about validity that hold irrespective of the logical form of the components of the arguments. They arise from the general definition of validity and its structural features, rather than the rules governing particular connectives, quantifiers or other items of vocabulary.
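The semantic argument just given can be checked mechanically for particular cases. Here is a minimal sketch (an illustration, not part of the article) in which models are classical truth-value assignments and validity is preservation of truth; the particular formulas are invented for the example:

```python
from itertools import product

def holds(formula, model):
    """Truth of a formula in a model (a dict from atoms to booleans).
    Atoms are strings; ("->", a, b) is a material conditional."""
    if isinstance(formula, str):
        return model[formula]
    _, a, b = formula
    return (not holds(a, model)) or holds(b, model)

def valid(premises, conclusion, atoms=("p", "q", "r")):
    """X |= A: every model of all the premises is a model of the conclusion."""
    for values in product([True, False], repeat=len(atoms)):
        model = dict(zip(atoms, values))
        if all(holds(x, model) for x in premises) and not holds(conclusion, model):
            return False  # this model is a counterexample
    return True

# An instance of Cut: X |= A and Y, A |= B together guarantee X, Y |= B.
X, A = ["p", ("->", "p", "q")], "q"
Y, B = [("->", "q", "r")], "r"
assert valid(X, A)          # X |= A
assert valid(Y + [A], B)    # Y, A |= B
assert valid(X + Y, B)      # so X, Y |= B, as the Cut rule requires
```

As the reasoning above shows, no appeal to the truth conditions of particular connectives is needed: the third assertion is guaranteed by the first two however `holds` evaluates formulas.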

Nonetheless, different logical systems differ in the rules governing particular logical concepts. Intuitionistic logic famously differs from classical logic in its treatment of negation. The structural rules are an essential part of any explanation of the behaviour of individual logical concepts, because we use the ambient context of logical consequence to characterise the behaviour of those concepts. For example, one way to characterise the behaviour of the conditional \(p\rightarrow q\) (read as “if \(p\) then \(q\)”) can take this form:

\[ X, p \vdash q \text{ if and only if } X \vdash p \rightarrow q \]
This says that \(q\) follows from \(X\) together with \(p\) just when
\(p \rightarrow q\) follows from \(X\) alone. The validity of the
transition from \(p\) to \(q\) (given \(X)\) is recorded by the
conditional \(p \rightarrow q\). We characterise validity concerning
conditionals in terms of a statement about validity involving the
*components* of those conditionals.

(This connection between the conditional and consequence is called
*residuation* by analogy with the case in mathematics. Consider
the connection between addition and subtraction. \(a + b = c\) if and
only if \(a = c - b\). The resulting \(a\) (which is \(c - b)\) is the
*residual*, what is left of \(c\) when \(b\) is taken away.
Another name for this connection is the *deduction
theorem*.)
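Residuation can also be checked exhaustively in a small model. The sketch below (an illustration, not part of the article) anticipates the relational semantics of Section 4: propositions are sets of points, premise combination is fusion along a ternary relation \(R\), and the conditional is its residual. The particular frame is invented:

```python
from itertools import product

points = range(4)
# A hypothetical ternary relation: Rxyz holds when x + y = z (mod 4).
R = {(x, y, z) for x, y, z in product(points, repeat=3) if (x + y) % 4 == z}

def fuse(X, A):
    """Premise combination: X . A = {z : Rxyz for some x in X, y in A}."""
    return {z for (x, y, z) in R if x in X and y in A}

def arrow(A, B):
    """A -> B = {x : whenever Rxyz and y is in A, z is in B}."""
    return {x for x in points
            if all(z in B for (w, y, z) in R if w == x and y in A)}

def entails(X, Y):
    return X <= Y  # every point where X holds is one where Y holds

def propositions():
    for bits in product([0, 1], repeat=len(points)):
        yield {x for x, b in zip(points, bits) if b}

# Residuation: X . A |- B if and only if X |- A -> B, for all propositions.
for X, A, B in product(propositions(), repeat=3):
    assert entails(fuse(X, A), B) == entails(X, arrow(A, B))
```

The equivalence holds for *any* ternary relation whatsoever, which is one way to see that residuation, like *Cut*, does not depend on the particular evaluation scheme.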

The residuation equivalence between \(X,p\vdash q\) and \(X\vdash p\to
q\) is not the only way to characterise the behaviour of the
conditional. Sometimes it is helpful to study rules in which the
inferential behaviour of a connective is described in terms of the
conditions under which you can infer a sentence involving that
connective (here, the left-to-right direction from \(X,p\vdash q\) to
\(X\vdash p\to q\) answers that question) and also in terms of what one
can infer *from* a sentence involving that connective. The
residuation condition for \(p\to q\) only answers this question
implicitly, by way of an appeal to the *Cut* rule, and the
assumption that \(p\to q\vdash p\to q\). For then we can reason as
follows:
\[ \cfrac{ \cfrac{ \begin{matrix}
\\
{Y\vdash p}
\end{matrix} \quad \cfrac{p\rightarrow q\vdash p\rightarrow q}{p\rightarrow q,p\vdash q} } {p\rightarrow q,Y\vdash q} \quad \begin{matrix}
\\
{X,q\vdash C}
\end{matrix} } {X,p\to q,Y\vdash C} \]
So, from the two premises \(Y\vdash p\) and
\(X,q\vdash C\) we can conclude \(X,p\to q,Y\vdash C\). This is a
*left* sequent rule for the conditional, presenting conditions
under which it is appropriate to infer some conclusion \(C\) from
\(p\to q\) used as a premise. This is sometimes used as an alternative
to the right-to-left half of the residuation equivalence (from
\(X\vdash p\to q\) to \(X,p\vdash q\)), since it, like the left-to-right
half of the residuation equivalence, characterises the behaviour of the
conditional in terms of the inferential behaviour of its
components.

Whether we use the residuation rules or the left/right sequent rules
for the conditional, the connections between consequence and the
conditional involve a number of distinct concepts. Not only is there
the turnstile, for logical validity, and the conditional, encoding
consequence inside the language of propositions, there is also the
*comma*, indicating the *combination* of premises. We
have read “\(X, p \vdash q\)” as “\(q\) follows from
\(X\) *together with* \(p\)”. To combine premises is to
have a way to take them *together*. But how can we take them
together? It turns out that there are different ways to do so, which
have different logical properties, and so, we have different
substructural logics with different purely structural rules. The
behaviour of premise combination may vary, and as a result, the
behaviour of the conditional and other logical connectives varies
too. In this introduction we will consider some examples of this
phenomenon.

### 1.1 Weakening

It is one thing for \(p\) to be *true*. It is another for the
conditional \(q \rightarrow p\) to be true. Yet, if
‘\(\rightarrow\)’ is a material conditional, \(q
\rightarrow p\) follows from \(p\). For many different reasons, we may
wish to understand how a conditional might work in the
*absence* of this inference. This is tied to the behaviour of
premise combination, as can be shown by this demonstration.

From the axiomatic \(p \vdash p\) (anything follows from
*itself*) we deduce that \(p\) follows from \(p\) together with
\(q\), and then by residuation, \(p \vdash q \rightarrow p\). If we
wish to reject the inference from \(p\) to \(q \rightarrow p\), then
we either reject residuation, or reject the identity axiom at the
start of the proof, or we reject the first step of the proof. It is
illuminating to consider what is involved in this last option. Here,
we are to deny that \(p\) follows from \(p, q\). In general, we are to
reject the inference rule that has this form:

\[ \cfrac{X\vdash A}{X, Y\vdash A} \]

This is the rule of *weakening*: it steps from a stronger
statement, that \(A\) follows from \(X\) to a possibly weaker one,
that \(A\) follows from \(X\) together with \(Y\). In this rule, the
members of \(X\), and \(Y\) and the conclusion \(A\) are arbitrary, so
it is a purely *structural* rule.

People have offered different reasons for rejecting the rule of
weakening, depending on the interpretation of consequence and premise
combination. One of the early motivating examples comes from a concern
for relevance. If the logic is *relevant* (if to say that \(p\)
entails \(q\) is to say, at least, that \(q\) truly
*depends* on \(p)\) then weakening need not hold. We may indeed
have \(A\) following from \(X\), without \(A\) following from \(X,Y\),
for it need not be the case that \(A\) depends on \(X\) and \(Y\)
together.

In relevant logics the rule of weakening fails on the *other*
side of the comma too, in that we also wish *this* argument to
be invalid:

\[ \cfrac{X\vdash A}{Y, X\vdash A} \]

Again, \(q\) may follow from \(q\), but this does not mean that it
follows from \(p\) *together with* \(q\), provided that
“together with” is meant in an appropriately strong sense.
So, in relevant logics, the inference from an arbitrary premise to a
logical truth such as \(q \rightarrow q\) may well fail.

### 1.2 Commutativity

If the mode of premise combination is commutative (if anything which follows from \(X, Y\) also follows from \(Y, X)\) then we can reason as follows, using just the identity axiom and residuation:

\[ \cfrac{p \rightarrow q \vdash p \rightarrow q} { \cfrac{p \rightarrow q, p \vdash q} { \cfrac{p, p \rightarrow q \vdash q} {p \vdash(p \rightarrow q) \rightarrow q} } } \]
In the absence of commutativity of premise combination, this proof is not available. This is another simple example of the connection between the behaviour of premise combination and that of deductions involving the conditional.

There are many kinds of conditional for which this inference fails. If
a conditional has *modal* force (if it expresses a kind of
entailment, in which \(p \rightarrow q\) is true when in every related
circumstance in which \(p\) holds, \(q\) does too), and if
“\(\vdash\)” expresses local consequence \((p\vdash q\) if
and only if any model, at any circumstance at which \(p\) holds, so
does \(q)\) it fails. It may be true that Greg is a logician \((p)\)
and it is true that Greg’s being a logician entails Greg’s
being a philosopher \((p \rightarrow q\) – in related
circumstances in which Greg is a logician, he is a philosopher) but
this does not *entail* that Greg is a philosopher. (There are
many circumstances in which the entailment \((p \rightarrow q)\) is
true but \(q\) is not.) So we have a circumstance in which \(p\) is
true but \((p \rightarrow q) \rightarrow q\) is not. The argument is
invalid.

This counterexample can also be understood in terms of the behaviour of
premise combination. Here when we say \(X,A \vdash B\) is true, we are
not just saying that \(B\) holds in any circumstance in which \(X\)
and \(A\) both hold. If we are after a genuine *entailment*
\(A \rightarrow B\), then we want \(B\) to be true in any
(related) circumstance in which \(A\) is true. So, \(X,A \vdash B\)
says that in *any* possibility in which \(A\) is true, so is
\(B\). These possibilities might not satisfy all of \(X\). (In
classical theories of entailment, the possibilities are those in which
everything taken as *necessary* in \(X\) is true.)

If premise combination is not commutative, then residuation can go in
*two* ways. In addition to the residuation condition giving the
behaviour of \(\rightarrow\), we may wish to define a new arrow
\(\leftarrow\) as follows:

\[ p, X \vdash q \text{ if and only if } X \vdash q \leftarrow p \]

For the left-to-right arrow we have *modus ponens* in this
direction:

\[ p \rightarrow q, p \vdash q \]

For the right-to-left arrow, *modus ponens* is provable with
the premises in the opposite order:

\[ p, q \leftarrow p \vdash q \]

This is a characteristic of substructural logics. When we pay
attention to what happens when we don’t have the full complement
of structural rules, then new possibilities open up. We uncover
*two* conditionals underneath what was previously one (in
intuitionistic or classical logic).

In the next section we will see another example motivating non-commutative premise combination and these two different conditionals.

### 1.3 Associativity

Here is another kind of structural assumption present in proofs. So
far, you might have assumed that the premises in a sequent are ordered
in a list, and that the premise list \(p,q,r\) could be equally
understood as the list \(p,q\), to which \(r\) has been added on the
right, or the list \(q,r\) to which \(p\) has been added on the left.
If so, you take it that the premise structures \((p,q),r\) and
\(p,(q,r)\) are identical, and that premise combination (represented
here by the comma) is *associative*. The associativity of
premise combination is another structural rule which has an impact on
what inferences are valid.

To get a clear picture of how this follows, it will help to return to
the formulation of the *Cut* rule, to see how we might clearly
state it without prejudging whether premise combination is
associative. Its more general form goes like this:

\[ \cfrac{X\vdash A\qquad Y(A)\vdash B}{Y(X)\vdash B} \]

Whenever we can prove \(A\) from \(X\), and we can prove \(B\) from some assumptions including \(A\) somewhere, we can prove \(B\) from those assumptions where we replace those appeals to \(A\) (in those places) with appeals to \(X\) instead. Here, there is no implicit reordering or reshuffling of how premises are combined. We surgically replace the appeal to \(A\) with the appeal to whatever it is that we used to justify \(A\).

In a similar way, the conditional rules can be understood without
appeal to associativity: the equivalence between \(X,p\vdash q\) and
\(X\vdash p\rightarrow q\) marks the fact that the comma can be read
as *application*: the assumptions \(X\) deliver a conditional
if and only if whenever we apply those assumptions to the
antecedent, this delivers the consequent. There is nothing in the
notion of application *per se* that assumes associativity.

With this understanding, the *left* rule for the conditional
has the following shape:

\[ \cfrac{Y\vdash p\qquad X(q)\vdash C}{X(p\rightarrow q, Y)\vdash C} \]

This says that if we appeal to \(q\) as a part of some justification of \(C\) and if we have some justification \(Y\) of another claim \(p\), then instead of directly appealing to \(q\) in that justification of \(C\) we can appeal to \(p\to q\) applied to \(Y\). Again, this formulation is careful to respect the order of application, and no implicit reshuffling has taken place.

With all that understood, we can see the following proof, in which we first apply the left rule twice, introducing \(r\to p\) and \(p\to q\) to show that \(p\to q,(r\to p,r)\) together justify \(q\). To use this to conclude something about \(r\to q\) we would like to discharge the assumption \(r\), but to do this, we must rearrange the order in which the premises are combined.

\[ \cfrac{ \begin{matrix} \\ q\vdash q \end{matrix} \qquad \begin{matrix} p\vdash p\quad r\vdash r \\ \hline r \rightarrow p, r \vdash p \end{matrix}} { \cfrac{p \rightarrow q, (r \rightarrow p, r) \vdash q} { \cfrac{(p \rightarrow q, r \rightarrow p), r \vdash q} { \cfrac{ p \rightarrow q, r \rightarrow p \vdash r \rightarrow q } {p \rightarrow q \vdash(r \rightarrow p) \rightarrow(r \rightarrow q)} } } } \]
So, this proof appeals to the associativity of premise combination, while contraction and weakening are not used. In the absence of associativity, we should not expect the inference from \(p\to q\) to \((r\to p)\to(r\to q)\) to be valid.

### 1.4 Contraction

A final important example is the rule of *contraction* which
dictates how premises may be reused. Contraction is crucial in the
inference of \(p \rightarrow q\) from \(p \rightarrow (p \rightarrow
q)\)

\[ \cfrac{p \rightarrow (p \rightarrow q) \vdash p \rightarrow (p \rightarrow q)} { \cfrac{p \rightarrow (p \rightarrow q), p \vdash p \rightarrow q} { \cfrac{(p \rightarrow (p \rightarrow q), p), p \vdash q} { \cfrac{p \rightarrow (p \rightarrow q), p \vdash q} {p \rightarrow (p \rightarrow q) \vdash p \rightarrow q} } } } \]

In this proof, at the second-to-last step we use the principle that if
we can derive some conclusion by way of applying some assumption
*twice* (here, \(p\)), it can be justified when we apply that
assumption *once* instead. (Note: in this proof on the third
line the assumptions are bracketed because this proof does not appeal
to the associativity of premise combination, so if we wish to
distinguish \((A,B),C\) from \(A,(B,C)\), we may do so.)

These different examples can give a sense of the role of structural rules in determining the behaviour of a concept such as conditionality. Not only do structural rules influence the conditional, but they also have their effects on other connectives, such as conjunction and disjunction (as we shall see below) and negation (Dunn 1993; Restall 2000).

### 1.5 Structure on the right of the turnstile

Gentzen’s pioneering work on the sequent calculus
(Gentzen 1935)
shows that the difference between *classical* logic and
*intuitionistic* logic can also be understood as a difference
not of the behaviour of one logical connective or
another—whether negation, disjunction or the
conditional—but as a matter of the ambient structure of sequents
in which those connectives are defined, and the structural rules
satisfied in that setting. Gentzen’s sequent calculus
*LJ* for intuitionistic logic is formulated in terms of
sequents of the form \(X \vdash A\), in which we have a collection of
antecedents and a single consequent. These are the kinds of sequents
we have focussed on so far. Gentzen’s *LK* calculus for
classical logic formulates sequents in a more general form

\[ X \vdash Y \]

where we allow a number of formulas on the right of the sequent, just
as we do on the left, so both \(X\) and \(Y\) are collections of
formulas. The intended interpretation is that from *all* of the
\(X\) some of the \(Y\) follow. In other words, we cannot have all of
the \(X\) and *none* of the \(Y\) obtaining. (See the entries
on
Proof Theory (Section 2)
and
Propositional Logic (Section 2.3.2)
for more on the sequent calculus.)

Allowing sequents with multiple consequents and translating the familiar connective rules into this wider context, we are able to derive classical tautologies. For example, the derivation

\[ \cfrac{p \vdash p} { \cfrac{p \vdash q, p} {\vdash p \rightarrow q, p} } \]
shows that either \(p \rightarrow q\) or \(p\) must hold. This is
classically valid (if \(p\) fails, \(p\) is *false*, and
conditionals with false antecedents are true), but invalid in
intuitionistic logic. Similarly, the intuitionist derivation of the
law of non-contradiction can be naturally ‘dualised’ into
a classical proof of the law of the excluded middle, as follows:
\[ \cfrac{p\vdash p}{\cfrac{p,\neg p\vdash} {p\land\neg p\vdash}} \qquad\qquad \cfrac{p\vdash p}{\cfrac{\vdash p,\neg p}{\vdash p\lor\neg p}} \]
The difference between classical and intuitionistic
logic, then, can be understood as a difference arising from the kinds
of structural rules permitted, and the kinds of structures appropriate
to use in the analysis of logical consequence, rather than a
difference due to the behaviour of one or more different logical
constants.
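The classical validity of the sequents derived above can be confirmed by brute force over truth-value assignments (a quick sketch; intuitionistic invalidity cannot, of course, be established by truth tables):

```python
from itertools import product

def implies(a, b):
    # Material conditional: false only when a is true and b is false.
    return (not a) or b

# The sequent |- p -> q, p read disjunctively: in every valuation
# at least one formula on the right comes out true.
assert all(implies(p, q) or p for p, q in product([True, False], repeat=2))

# The law of the excluded middle, conclusion of the dual derivation:
assert all(p or (not p) for p in [True, False])
```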

### 1.6 Cut and Identity

Each of the structural rules considered here (weakening,
commutativity, associativity, contraction) essentially involves the
behaviour of premise combination, or in the case of classical
sequents, premise and *conclusion* combination. The structural
rule we first saw, the *Cut* rule, in its simplest form, has
nothing to do with combination of premises. In its most stripped-down
form, the *Cut* rule takes this shape:
\[ \cfrac{A\vdash B\qquad B\vdash C}{A\vdash C} \]
which
involves no taking together of formulas on the left or the right of
the sequent at all. The *Cut* rule, at its heart, is not about
the distinctive behaviour of how premises or conclusions are combined,
but about the relationship between what is needed to *derive* a
formula (here, the so-called ‘*Cut*’ formula,
\(B\)) occurring on the *right* of a sequent and then what one
can *do* with that formula, as it occurs on the *left*.
To use the economic metaphor: if we can ‘pay’ \(A\) to
receive \(B\), and we could pay \(B\) to receive \(C\), then our
initial payment of \(A\) should suffice to provide \(C\). Speaking
more abstractly, the *Cut* rule states that a \(B\) on the
right of a sequent is at least as powerful as a \(B\) on the left:
anything we can prove from a \(B\) on the *left* (here, \(C\))
could be proved by anything that entails a \(B\) on the
*right*.

There is one more structural rule which deserves mention. The
*Identity* rule matches formulas on the left and on the right
in the opposite way:
\[ B\vdash B \]
This identity rule tells us that
\(B\) on the left is strong enough to deliver \(B\) on the right. So,
*Cut* and *Identity* as structural rules involve the
relationship between occurrences of formulas in the left and right of
sequents, while the other structural rules govern the ways that
formulas may be manipulated inside the left or the right of
sequents.

## 2. Different Logics

A range of substructural logics have been developed, independently, in the second half of the 20th century and into the 21st. These logics can be motivated in different ways.

### 2.1 Relevant Logics

It is natural to think that there is some connection between logical
consequence and relevance. If \(X \vdash A\) holds, then one might hope
that the premises of our deduction (\(X\)) must somehow be
*relevant* to the conclusion \(A\). With this understanding of
relevance in mind, it is natural to restrict premise combination: We
may have \(X \vdash A\) without also having \(X,Y \vdash A\), since
the new material \(Y\) might not be relevant to the deduction of
\(A\). Relevant logics, understood in this way, are
*substructural* because relevance is understood as a property
of logical consequence itself (and the structural rules governing
consequence), rather than a property of any one connective.

In the 1950s, Moh
(1950),
Church
(1951)
and Ackermann
(1956)
all gave accounts of what a ‘relevant’ logic could be.
The ideas have been developed by a stream of workers centred around
Anderson and Belnap, their students Dunn and Meyer, and many others.
The canonical references for the area are Anderson, Belnap and
Dunn’s two-volume *Entailment*
(1975
and
1992).
Other introductions can be found in Read’s *Relevant
Logic*
(1988),
Dunn and Restall’s *Relevance Logic*
(2000),
and Mares’ *Relevant Logic*: *a philosophical
interpretation*
(2004).

### 2.2 Linear Logic and other contraction-free logics

This is not the only way to restrict premise combination. Girard
(1987)
introduced *linear logic* as a model for processes and
resource use. The idea in this account of deduction is that resources
must be used (so premise combination satisfies the relevance
criterion) and they do not extend *indefinitely*. Premises
cannot be *re*-used. So, I might have \(X,X \vdash A\), which
says that I can use \(X\) twice to get \(A\). I might not have \(X
\vdash A\), which says that I can use \(X\) once alone to get \(A\). A
helpful introduction to linear logic is given in Troelstra’s
*Lectures on Linear Logic*
(1992).

There are other formal logics in which the *contraction rule*
(from \(X,X \vdash A\) to \(X \vdash A)\) is absent. Most famous among
these are Łukasiewicz’s many-valued logics. There has been
a sustained interest in logics without contraction because it has been
noticed that the contraction rule features in paradoxical derivations
such as the liar paradox, Curry’s paradox, and other semantic
paradoxes
(Curry 1977,
Geach 1955; see also
Field 2008,
Restall 1994 and
Zardini 2021).

### 2.3 The Lambek Calculus

Independently of either of these traditions, Joachim Lambek considered
mathematical models of language and syntax
(Lambek 1958,
1961). The idea here is that premise combination corresponds
to composition of strings or other linguistic units. Here \(X,X\)
differs in content from \(X\), but in addition, \(X,Y\) differs from
\(Y,X\). Not only does the *number* of premises used count but
so does their *order*. Good introductions to the Lambek
calculus (also called *categorial grammar*) can be found in
books by Moortgat
(1988),
Morrill
(1994),
and Moot and Retoré
(2012).

### 2.4 ST and TS

Each substructural logic considered so far restricts the behaviour of
premise combination in some way, by banning weakening, contraction, or
exchange. Another family of substructural logics keeps all of those
structural rules, but restricts one of the cross-sequent rules:
*Cut* and *identity*. These substructural logics differ
from classical logic either very little, or enormously, depending on
which structural rule is rejected. Classical logic without the
*Cut* rule is, at one level, indistinguishable from classical
logic, because as Gentzen has shown
(1935),
the *Cut* rule is eliminable: any sequent that can be proved
using cut can also be proved without it. So, the substructural variant
of classical logic that goes without the cut rule is, as far as valid
sequents are concerned, classical logic itself.

On the other hand, if we remove the *Identity* rule from
standard presentations of classical logic, the result is a proof
system that is radically different. Classical logic without identity has
*no* valid sequents at all. Nothing is provable, because
derivations have nowhere to start.

Since the 2000s, there has been sustained attention on logics in which
the rules of cut or identity are jettisoned for the purposes of
treatment of the various semantic phenomena, including vagueness and
the semantic paradoxes. Zardini’s
(2008)
“A Model for Tolerance” and van Rooij’s
(2011)
“Vagueness, Tolerance and Non-Transitive Entailment” each
utilised a cut-free (non-transitive) consequence relation to model
vague predicates satisfying a natural tolerance condition, to the
effect that anything sufficiently \(F\)-close to something that is
\(F\) is also \(F\). If the conditional expressing this tolerance
claim (and the underlying consequence relation) is not transitive,
then the sorites paradox is, in some sense, averted. Cobreros *et
al.*
(2012)
showed that this logic has simple models in terms of strict and
tolerant semantic evaluation, which will be discussed below in
Section 6,
and Ripley
(2012)
showed that this logic provides the means to model a truth predicate
\(T\) for which \(T\langle A\rangle\) is equivalent to \(A\), and for
which a liar sentence \(\lambda\) (for which \(\lambda=\neg
T\lambda\)) gives rise to inconsistency, but not triviality. We can
derive \(\vdash T\lambda\) and \(\vdash\neg T\lambda\) (and hence
\(T\lambda \vdash \bot\)), by the standard liar-paradoxical reasoning,
but the failure of the cut rule for \(T\)-sentences means that we
cannot proceed from \(\vdash T\lambda\) and \(T\lambda\vdash \bot\) to
the expected trivialising consequence \(\vdash\bot\). (Alan Weir
(2005)
earlier also utilised a Cut-free logic in the analysis of the
paradoxes arising from a naïve truth predicate, however the
resulting logic is not **ST**, as the usual connective rules for
the conditional and negation are modified, along with the rejection of
*Cut*.)

For reasons that will become manifest in
Section 6,
this logic is called **ST** (for Strict/Tolerant
consequence), and its cousin, **TS** (for Tolerant/Strict
consequence) is the logic of the classical sequent calculus without
the *identity* rule.

## 3. Proof Theory

We have already seen a fragment of one way to present substructural
logics, in terms of proofs. We have used the residuation condition,
which can be understood as including two rules for the conditional,
one to *introduce* a conditional,

\[ \cfrac{X, A\vdash B}{X\vdash A\rightarrow B} \]

and another to *eliminate* it.

\[ \cfrac{X\vdash A\rightarrow B}{X, A\vdash B} \]

Rules like these form the cornerstone of a natural deduction system,
and these systems are available for the wide sweep of substructural
logics. But proof theory can be done in other ways. *Gentzen*
systems operate not by introducing and eliminating connectives, but by
introducing them both on the left and the right of the turnstile of
logical consequence. We keep the introduction rule above, and replace
the elimination rule by one introducing the conditional on the
left:

\[ \cfrac{X\vdash A\qquad Y(B)\vdash C}{Y(A\rightarrow B, X)\vdash C} \]

This rule is more complex, but it has the same effect as the arrow elimination rule: It says that if \(X\) suffices for \(A\), and if you use \(B\) (in some context \(Y)\) to prove \(C\) then you could just as well have used \(A \rightarrow B\) together with \(X\) (in that same context \(Y)\) to prove \(C\), since \(A \rightarrow B\) together with \(X\) gives you \(B\).

Gentzen systems, with their introduction rules on the left and the
right, have very special properties which are useful in studying
logics. Since connectives are always *introduced* in a proof
(read from top to bottom), proofs never *lose* structure. If a
connective does not appear in the conclusion of a proof, it will not
appear in the proof at all, since connectives cannot be
eliminated.

In certain substructural logics, such as linear logic and the Lambek
calculus, and in the fragment of the relevant logic \(\mathbf{R}\)
without disjunction, a Gentzen system can be used to show that the
logic is *decidable*, in that an algorithm can be found to
determine whether or not an argument \(X \vdash A\) is valid. This is
done by searching for proofs of \(X \vdash A\) in a Gentzen system.
Since the premises from which this conclusion may be derived feature
no vocabulary not in the conclusion, and have no greater complexity (in
these systems), there are only finitely many possible premises. An algorithm can
check if these satisfy the rules of the system, and proceed to look
for premises for these, or to quit if we hit an axiom. In this way,
decidability of some substructural logics is assured.
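The backward-search procedure just described can be sketched for a small fragment. The code below (an illustrative reconstruction, not the article's) decides an implication-only calculus with associative but non-commutative premise combination and neither weakening nor contraction; the name `provable` and the encoding are invented. Search terminates because each premise sequent is strictly smaller than its conclusion:

```python
def provable(antecedent, succedent):
    """Backward proof search. antecedent is a tuple of formulas;
    a formula is an atom (a string) or ("->", a, b)."""
    # Identity axiom.
    if antecedent == (succedent,):
        return True
    # Right rule: X |- A -> B  from  X, A |- B.
    if isinstance(succedent, tuple):
        _, a, b = succedent
        if provable(antecedent + (a,), b):
            return True
    # Left rule: D1, A->B, G, D2 |- C  from  G |- A  and  D1, B, D2 |- C,
    # with the argument G appearing immediately after the conditional.
    for i, f in enumerate(antecedent):
        if isinstance(f, tuple):
            _, a, b = f
            for j in range(i + 1, len(antecedent) + 1):
                gamma = antecedent[i + 1:j]
                rest = antecedent[:i] + (b,) + antecedent[j:]
                if provable(gamma, a) and provable(rest, succedent):
                    return True
    return False

imp = lambda a, b: ("->", a, b)
# Modus ponens in this order succeeds...
assert provable((imp("p", "q"), "p"), "q")
# ...but not with the premises swapped: there is no exchange rule.
assert not provable(("p", imp("p", "q")), "q")
# Weakening fails: q does not follow from p together with q.
assert not provable(("p", "q"), "q")
# Prefixing (Section 1.3) needs only associativity, so it is provable:
assert provable((imp("p", "q"),), imp(imp("r", "p"), imp("r", "q")))
# Contraction is absent: p -> (p -> q) does not yield p -> q.
assert not provable((imp("p", imp("p", "q")),), imp("p", "q"))
```

Adding structural rules to the search (say, trying permuted antecedents) enlarges the set of provable sequents, but unrestricted backward contraction would make premises *grow*, which is one glimpse of why decidability is delicate for logics such as \(\mathbf{R}\).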

However, not all substructural logics are decidable in this sense.
Most famously, the relevant logic \(\mathbf{R}\) is not decidable.
This is partly because its proof theory is more complex than that of
other substructural logics. \(\mathbf{R}\) differs from linear logic
and the Lambek calculus in having a straightforward treatment of
conjunction and disjunction. In particular, conjunction and
disjunction satisfy the rule of *distribution*:

The natural proof of distribution in any proof system uses both
weakening and contraction, so it is not available in the relevant
logic \(\mathbf{R}\), which does not contain weakening. As a result,
proof theories for \(\mathbf{R}\) either contain distribution as a
primitive rule, or contain a second form of premise combination (so
called *extensional* combination, as opposed to the
*intensional* premise combination we have seen) which satisfies
weakening and contraction.
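To see where the structural rules enter, here is a hedged reconstruction of that natural proof (not a derivation in any one official system), writing \(X\) for \(A\land(B\lor C)\) and \(D\) for \((A\land B)\lor(A\land C)\):

```latex
% X |- A  and  X |- B v C      by conjunction elimination
% X, B |- A                    by WEAKENING from X |- A
% X, B |- A & B, so X, B |- D; similarly X, C |- D
% Disjunction elimination on X |- B v C then yields X, X |- D,
% and CONTRACTION collapses the doubled premise:
\cfrac{X \vdash B \lor C \qquad X, B \vdash D \qquad X, C \vdash D}
      {\cfrac{X,\, X \vdash D}{X \vdash D}}
```

Weakening is used to bring \(A\) into each case of the disjunction elimination, and contraction to collapse the two occurrences of \(X\) at the end.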

In recent years, a great deal of work has been done on the proof
theory of *classical logic*, inspired and informed by research
into substructural logics. Classical logic has the full complement of
structural rules, and is historically prior to more recent systems of
substructural logics. However, when it comes to attempting to
understand the deep structure of *classical* proof systems (and
in particular, when two derivations that differ in some superficial
syntactic way are *really* different ways to represent the one
underlying ‘proof’) it is enlightening to think of
classical logic as formed by a basic substructural logic, in which
extra structural rules are imposed as additions. In particular, it has
become clear that what distinguishes classical proof from its siblings
is the presence of the structural rules of contraction and weakening
in their complete generality (see, for example,
Bellin *et al.* 2006
and the literature cited therein).

## 4. Model Theory

While the relevant logic \(\mathbf{R}\) has a proof system more
complex than the substructural logics such as linear logic, which lack
distribution of (extensional) conjunction over disjunction, its
*model theory* is altogether simpler. A Routley-Meyer
*model* for the relevant logic \(\mathbf{R}\) consists of a
set of *points* \(P\) with a three-place relation \(R\) on
\(P\). A conditional \(A \rightarrow B\) is evaluated at a world as
follows:

\(A \rightarrow B\) is true at \(x\) if and only if for each \(y\) and \(z\) where \(Rxyz\), if \(A\) is true at \(y\), then \(B\) is true at \(z\).

An argument is *valid* in a model just when, at any point at
which the premises are true, the conclusion is true as well. The argument \(A
\vdash B \rightarrow B\) is invalid because we may have a point \(x\)
at which \(A\) is true, but at which \(B \rightarrow B\) is not. We
can have \(B \rightarrow B\) fail to be true at \(x\) simply by having
\(Rxyz\) where \(B\) is true at \(y\) but not at \(z\).
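This countermodel can be checked mechanically. The following Python sketch builds a toy frame of our own devising (it is not a full \(\mathbf{R}\)-model; it simply illustrates the evaluation clause):

```python
# A toy Routley-Meyer frame: three points and a single R-triple.
points = {'x', 'y', 'z'}
R = {('x', 'y', 'z')}                  # Rxyz holds; nothing else does

# Truth sets for the atoms: A holds only at x, B only at y.
truth = {'A': {'x'}, 'B': {'y'}}

def arrow(a, b, w):
    """a -> b is true at w iff for every triple Rwyz, if a is true
    at y then b is true at z."""
    return all(y not in truth[a] or z in truth[b]
               for (u, y, z) in R if u == w)

# B -> B fails at x, since Rxyz holds with B true at y but not at z;
# yet A is true at x.  So A |- B -> B has a counterexample at x.
assert not arrow('B', 'B', 'x')
assert 'x' in truth['A']
```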

The three-place relation \(R\) follows closely the behaviour of the
mode of premise combination in the proof theory for a substructural
logic. For different logics, different conditions can be placed on
\(R\). For example, if premise combination is commutative, we place a
*symmetry* condition on \(R\) like this: \(Rxyz\) if and only
if \(Ryxz\). Ternary relational semantics gives us great facility to
*model* the behaviour of substructural logics. (The extent of
the correspondence between the proof theory and algebra of
substructural logics and the semantics is charted in Dunn’s work
on *Gaggle Theory*
(1991)
and is summarised in Restall’s *Introduction to
Substructural Logics*
(2000).)

Furthermore, if conjunction and disjunction satisfy the distribution
axiom mentioned in the previous section, they can be modelled
straightforwardly too: a conjunction is true at a point just when both
conjuncts are true at that point, and a disjunction is true at a point
just when at least one disjunct is true there. For logics, such as
linear logic, *without* the distribution axiom, the semantics
must be more complex, with a different clause for disjunction required
to invalidate the inference of distribution.

It is one thing to use a semantics as a formal device to model a
logic. It is another to use a semantics as an *interpretive*
device to *apply* a logic. The literature on substructural
logics provides us with a number of different ways that the ternary
relational semantics can be applied to describe the logical structure
of some phenomena in which the traditional structural rules do not
apply.

For logics like the Lambek calculus, the interpretation of the semantics is straightforward. We can take the points to be linguistic items, and the ternary relation to be the relation of concatenation (\(Rxyz\) if and only if \(x\) concatenated with \(y\) results in \(z)\). In these models, the structural rules of contraction, weakening and permutation all fail, but premise combination is associative.
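As a quick illustration (the encoding of linguistic items as strings is our own):

```python
# Points are strings, and Rxyz holds iff x concatenated with y is z.
def R(x, y, z):
    return x + y == z

# Premise combination is associative: combining ab with cd and then
# with e reaches the same point as combining ab with cde ...
assert R('ab' + 'cd', 'e', 'abcde') and R('ab', 'cd' + 'e', 'abcde')
# ... but commutativity fails: combining in the other order gives a
# different point ...
assert R('ab', 'cd', 'abcd') and not R('cd', 'ab', 'abcd')
# ... and contraction fails: combining x with itself never gives
# back x itself (for non-empty x).
assert not R('ab', 'ab', 'ab')
```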

The contemporary literature on linguistic classification extends the basic Lambek Calculus with richer forms of combination, in which more syntactic features can be modelled (see Moortgat 1995).

Another application of these models is in the treatment of the
semantics of *function application*. We can think of the points
in a model structure as both *functions* and *data*, and
hold that \(Rxyz\) if and only if \(x\) (considered as a function)
applied to \(y\) (considered as data) is \(z\). Traditional accounts
of functions do not encourage this dual use, since functions are taken
to be of a ‘higher’ type than their inputs or outputs (on the
traditional set-theoretic model of functions, a function *is* the
set of its *input-output* pairs, and so, it can never take
*itself* as an input, since sets cannot contain themselves as
members). However, systems of functions modelled by the untyped
\(\lambda\)-calculus, for example, allow for self-application. Given
this reading of points in a model, a point is of type \(A \rightarrow
B\) just if, whenever it is applied to an input of type \(A\), its
output is of type \(B\). The inference rules of this system are then principles
governing types of functions: the sequent

\[(A \rightarrow B) \land (A \rightarrow C) \vdash A \rightarrow (B \land C)\]

tells us that whenever a function takes \(A\)s to \(B\)s and \(A\)s to \(C\)s, then it takes \(A\)s to things that are both \(B\) and \(C\).

This example gives us a model in which the appropriate substructural
logic is extremely weak. *None* of the usual structural rules
(not even associativity) are satisfied in this model. This example of
a ternary relational model is discussed in
(Restall 2000,
Chapter 11).
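Here is a small Python sketch of this reading of the ternary relation, with \(Rxyz\) holding just when \(x\) applied to \(y\) is \(z\); the particular points chosen are our own illustration:

```python
# Points are both functions and data: each point is a Python function
# taking points to points, so self-application is possible, as in the
# untyped lambda calculus.
identity = lambda p: p            # the point that returns its input

def const_id(p):                  # a point that ignores its input
    return identity

points = [identity, const_id]

def has_arrow_type(x, A, B):
    """x is of type A -> B iff whenever x is applied to an input of
    type A (a member of the set A), its output is of type B."""
    return all(x(y) in B for y in A)

everything = set(points)
assert has_arrow_type(identity, everything, everything)
assert has_arrow_type(const_id, everything, {identity})
# Self-application is unproblematic on this reading:
assert identity(identity) is identity
```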

For the relevant logic \(\mathbf{R}\) and its interpretation of
natural language conditionals, more work must be done in identifying
what features of reality the formal semantics models. This has been a
matter of some controversy, since not only is the ternary relation
unfamiliar to those whose exposure is primarily to modal logics with a
simpler *binary* accessibility relation between possible
worlds, but also because of the novelty of the treatment of
*negation* in models for relevant logics. It is not our place
to discuss this debate in much detail here. Some of this work is
reported in the article on
relevant logic
in this Encyclopedia, and a book-length treatment of relevant logic
in this light is Mares’ *Relevant Logic*: *a
philosophical interpretation*
(2004).

## 5. Quantifiers

The treatment of quantifiers in models for substructural logics has proved to be quite difficult, but advances have been made in the early 2000s. The difficulty came in what seemed to be a mismatch between the proof theory and model theory for quantifiers. Appropriate axioms or rules for the quantifiers are relatively straightforward. The universal quantifier elimination axiom \[ \forall xA \rightarrow A[t/x] \] states that an instance follows (in the relevant sense) from its universal generalisation. The introduction rule \[ \cfrac{\vdash A\rightarrow B} {\vdash A\rightarrow \forall xB} \] (where the proviso that \(x\) is not free in \(A\) holds) tells us that if we can prove an instance of the generalisation \(\forall xB\), as a matter of logic, from some assumption which makes no particular claim about that instance, we can also prove the generalisation from that assumption. This axiom and rule seem to fit nicely with any interpretation of the first-order quantifiers in a range of substructural logics, from the weakest systems, to strong systems like \(\mathbf{R}\).

While the proof theory for quantifiers seems well behaved, the
generalisation to model theory for substructural logics has proved
difficult. Richard Routley
(1980)
showed that adding the rules for the quantifiers to a very weak
system of substructural logic \(\mathbf{B}\) fits appropriately with
the ternary relational semantics, where quantifiers are interpreted as
ranging over a domain of objects, constant across all of the points in
the model. This fact does *not* apply for stronger logics, in
particular, the relevant logic \(\mathbf{R}\). Kit Fine
(1989)
showed that there exists a complex formula which holds in all
constant domain frame models for \(\mathbf{R}\) but which is not
derivable from the axioms. The details of Fine’s argument are
not important for our purposes, but the underlying cause for the
mismatch is relatively straightforward to explain. In the constant
domain semantics, the universal generalisation \(\forall x Fx\) has
exactly the same truth conditions—at every point in the
model—as the family of instances \(Fx_1\), \(Fx_2\),
\(Fx_3,\ldots\), \(Fx_\lambda,\ldots\), where the objects of the
domains are enumerated by the values of the terms \(x_i\). So, the
quantified expression \(\forall x Fx\) is semantically
indistinguishable from the (possibly infinite) conjunction \(Fx_1\land
Fx_2\land Fx_3\land\cdots \). However, no conjunction of instances
(even an infinite one) could be *relevantly* equivalent to the
universally quantified claim \(\forall x Fx\), because the instances
could be true in a circumstance (or could be made true by a
circumstance) without also making true the generalisation—if
there had been more things than those. So, constant domain models seem
ill-suited to the project of a relevant theory of quantification.

Robert Goldblatt and Edwin Mares
(2006)
have shown that there is a natural alternative semantics for relevant
quantification, and this approach turns out to be elegant and
relatively straightforward. The crucial idea is to modify the ternary
relational semantics just a little, so that not every set of points
need count as a ‘proposition’. That is, not every set of
points is the possible semantic value for a sentence. So, while there
is a set of worlds determined by the infinite conjunction of instances
of \(\forall xFx\): \(Fx_1\land Fx_2\land Fx_3\land\cdots \), that
precise set of worlds may not count as a proposition. (Perhaps there
is no way to single out those particular objects in such a way as to
draw them together in the one judgement.) What we *can* say is
the generalisation \(\forall xFx\) and that is a proposition that
entails each of the instances (that is the universal quantifier
elimination axiom), and if a proposition entails each instance, it
entails the generalisation (that is the introduction rule), so the
proposition expressed by \(\forall xFx\) is the *semantically
weakest* proposition that entails each instance *Fa*. This
is precisely the modelling condition for the universal quantifier in
Goldblatt & Mares’ models, and it matches the axioms
exactly.
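For a finite illustration of this clause, consider the following Python sketch. The family of ‘propositions’ is invented for the example, and we assume it is closed under unions, so that a largest proposition inside any set of points exists; Goldblatt and Mares' actual models impose further conditions that we ignore here.

```python
# Four points, and a designated family of propositions (closed under
# unions) -- not every set of points is a proposition.
props = [frozenset(), frozenset({1}), frozenset({3}),
         frozenset({1, 3}), frozenset({1, 2, 3, 4})]

# Truth sets of the instances Fa, one for each object a:
instances = [frozenset({1, 2, 3}), frozenset({1, 2, 4})]

# The intersection {1, 2} is where every instance is true, but it is
# not itself a proposition in this model.
common = frozenset.intersection(*instances)
assert common == {1, 2} and common not in props

# The clause: |forall x Fx| is the weakest (largest) proposition
# entailing every instance -- the union of all propositions included
# in that intersection.
forall = frozenset().union(*(p for p in props if p <= common))
assert forall == {1}
```

So here the generalisation expresses a strictly stronger proposition than the set of points at which every instance happens to hold, just as the discussion above requires.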

## 6. Logics without Cut or Identity

As we saw in
Section 1,
structural rules come in two kinds, those that constrain the
behaviour of premise or conclusion combination on one side of a
sequent (commutativity, contraction, weakening) and those that
essentially involve the connection between one side of the sequent and
the other (identity, and cut): our treatment of proofs and models for
substructural logics have focussed on substructural logics with
distinctive choices of structural rules of the first kind. In this
last section, it is time to focus on more recent work on substructural
logics that reject either *Cut* or *identity*.

Logics without the *Cut* rule have non-transitive consequence
relations. In the philosophical literature, there are a number of
different reasons cited for rejecting transitivity of consequence in
general. One reason is to focus on *material* consequence (see
Brandom 2000)
or *default* consequence (see
Reiter 1980,
and the entry on
*Non-Monotonic Logic*)
rather than necessary and formal consequence. (From Tweety being a
bird, the default inference that Tweety flies is appropriate. And,
from Tweety being a penguin, we can infer that Tweety is a bird.
However, it is inappropriate to cut out the middle step and to infer
that Tweety flies from the premise that Tweety is a penguin.)

Another reason to reject transitivity of validity is on grounds of
*relevance*. Neil Tennant has championed a relevant logic that
rejects the unrestricted *Cut* rule on relevance grounds
(2017).
In Tennant’s *core logic*, both \(\neg p\vdash \neg
p\lor q\) and \(p,\neg p\lor q\vdash q\) hold, but we cannot use
*Cut* to derive \(p,\neg p\vdash q\), for here, the \(q\) is
irrelevant and a logical free-rider. In core logic, \(p,\neg p\vdash
\) holds (since \(p\) and its negation are jointly inconsistent) but
we cannot use weakening to conclude \(p,\neg p\vdash q\). In this
logic, the cut rule does not hold in full generality, but a
restriction of it *does* hold. If \(X\vdash A,Y\) and
\(X',A\vdash Y'\), then there are *subsets* \(X^*\subseteq
X\cup X'\) and \(Y^*\subseteq Y\cup Y'\) where \(X^*\vdash Y^*\).
Restricting *Cut* on core logic grounds would be of no help to
the logician who wants to respond to the semantic paradoxes by
admitting \(\vdash T\lambda\) and \(T\lambda\vdash\bot\) while
rejecting \(\vdash\bot\).

The cut-free logic **ST** of strict/tolerant validity
mentioned above
is a very different kind of cut-free logic than either a logic of
material or default inference or Tennant’s core logic. For one
matter, at the level of valid sequents, it just *is* classical
logic, for it is given by Gentzen’s sequent calculus without the
*Cut* rule, and as Gentzen has shown
(1935),
the *Cut* rule is eliminable.

The model theory for **ST** is remarkably simple. We
evaluate formulas with respect to a *tripartite* evaluation.
(This could be understood as a function assigning to each formula one
of the values \(0\), \(i\) and \(1\), or equivalently as a
*partial* function assigning to each formula either \(0\) or
\(1\), but perhaps failing to assign a value in some cases.) We
evaluate complex formulas using the standard classical
truth-conditions, naturally modified to allow for partial valuations:
\(v(\neg A)=1\) iff \(v(A)=0\), and \(v(\neg A)=0\) iff \(v(A)=1\);
\(v(A\amp B)=1\) iff \(v(A)=1\) and \(v(B)=1\), and \(v(A\amp B)=0\)
iff \(v(A)=0\) or \(v(B)=0\), and so on. (This is one definition of
valuations for
Kleene’s strong three-valued logic.)
We will say that such a valuation counts as an
**ST**-counterexample to a sequent \(X\vdash Y\) if and
only if it assigns \(1\) to every formula in \(X\) and \(0\) to every
formula in \(Y\). If we think of the formulas assigned \(1\) as
*strictly* true (or as assertible when we hold ourselves to
strict standards) and those that are not assigned \(0\) as
*tolerantly* true (or as assertible when we hold ourselves to
more tolerant standards), then the valid arguments are those for which
whenever the premises are *strictly* true, the conclusion is at
least *tolerantly* true. Hence the name, **ST**.
Since validity is not preservation of a fixed value (the consequent
standards are lower than the antecedent standards), the *Cut*
rule can, in general, fail.
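These definitions are easy to implement. In the Python sketch below we write the middle value as 0.5, so that the strong Kleene clauses come out as simple arithmetic; the example then shows the *local* failure of *Cut*: a single valuation can satisfy both premise sequents of a cut while refuting its conclusion. (The formulas here are just atoms, which suffices for the point.)

```python
# Strong Kleene connectives over the values 0, 0.5 (the middle value
# i) and 1: negation flips, conjunction is min, disjunction is max.
def neg(x): return 1 - x
def conj(x, y): return min(x, y)
def disj(x, y): return max(x, y)

def st_counterexample(v, premises, conclusions):
    """v is an ST-counterexample to X |- Y iff it makes every premise
    strictly true (value 1) and every conclusion strictly false
    (value 0)."""
    return (all(v[p] == 1 for p in premises) and
            all(v[c] == 0 for c in conclusions))

v = {'p': 1, 'q': 0.5, 'r': 0}
# v refutes neither p |- q (q is not strictly false) nor q |- r
# (q is not strictly true) ...
assert not st_counterexample(v, ['p'], ['q'])
assert not st_counterexample(v, ['q'], ['r'])
# ... but it is a counterexample to the cut sequent p |- r.
assert st_counterexample(v, ['p'], ['r'])
```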

It is not too difficult to see how this substructural logic is an
attractive site to consider the semantic paradoxes. Given the liar
sentence \(\lambda\) the derivation of \(\vdash T\lambda\) tells us
that the liar is at least tolerantly true. The derivation of
\(T\lambda\vdash\bot\) tells us that the liar cannot be
*strictly* true. It is fixed in the middle, with both it and
its negation remaining tolerantly true, but not strictly true.

In a similar way, each of the conditional claims in a
sorites paradoxical argument
of the form \(p_n\rightarrow p_{n+1}\) can be taken to be at least
*tolerantly* true, since with a sufficiently slow variation
between cases, there will never be an instance where \(p_n\) is
strictly true and \(p_{n+1}\) is strictly false.

Once the **ST** response to the paradoxes was on the
table, it was natural to consider rejecting the *remaining*
structural rule, *identity*. Hence, Fjellstad
(2015)
and French
(2016)
have examined the logic of the sequent calculus without
*identity*. This, too, has a natural semantics in partial
valuations. If we retain *Cut* but discard identity, then the
natural logic is **TS**: a sequent \(X\vdash Y\) has a
**TS**-counterexample when there is a valuation that
sends every member of \(X\) *away from \(0\)* (i.e., they are
at least tolerantly true) and every member of \(Y\) *away from
\(1\)* (i.e., they are not strictly true). A valuation that
assigns \(p\) neither \(0\) nor \(1\) is thereby a counterexample to
the sequent \(p\vdash p\) and so, we have the means to invalidate the
rule of identity.
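The **TS** test differs only in which values count, as this companion sketch (same conventions as before, with 0.5 as the middle value) shows:

```python
def ts_counterexample(v, premises, conclusions):
    """v is a TS-counterexample to X |- Y iff every premise is at
    least tolerantly true (not 0) and every conclusion fails to be
    strictly true (not 1)."""
    return (all(v[p] != 0 for p in premises) and
            all(v[c] != 1 for c in conclusions))

# A valuation assigning p the middle value refutes even identity:
assert ts_counterexample({'p': 0.5}, ['p'], ['p'])
# A classical (two-valued) valuation never refutes identity:
assert not ts_counterexample({'p': 1}, ['p'], ['p'])
assert not ts_counterexample({'p': 0}, ['p'], ['p'])
```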

These substructural logics, those without cut or identity, raise the
question of their nature as *logics* in a way that the first
generation of substructural logics did not. **ST** has
exactly the same provable sequents in the logical vocabulary as
classical logic, and **TS** has none. Is
**ST** identical to classical logic? Is
**TS** identical to any other logic with no derivable
sequents? To flatly answer “yes” to these questions would
be to miss the distinctive behaviour of these logical systems.
**ST** differs from classical logic not at the level of
*sequents* but at the level of the relations between sequents.
The *Cut* rule, which can be understood as an inference rule
relating sequents, is valid under any class of two-valued valuations,
but is invalid in **ST**. Similarly, although at the
level of sequents the logic **TS** is empty, at the level
of relationships between sequents, it is rich. From this observation
has arisen the recent study of *meta-inferences* (see
Barrio *et al* 2014,
and
Barrio *et al* 2019).
Reflection on the natural way to understand meta-inferences between
sequents in logics like **ST** and **TS**
which use two different distinguished values has raised a natural
question. If we have two different grades of validity (strict and
tolerant) at the level of an individual formula, then why restrict
ourselves to a single validity verdict when it comes to a sequent? In
fact, two natural grades of validity for (level 2) meta-inferences are
available to us. **ST** validity is a much more
tolerant criterion than **TS** validity. We can say that
a sequent-to-sequent meta-inference is **TS/ST** valid if
whenever the premise sequents are **TS** valid, then the
conclusion sequent is **ST** valid. It turns out, then,
that the meta-inference of *Cut* is indeed
**TS/ST** meta-valid, and the **TS/ST**
valid meta-inferences are exactly the family of meta-inferences
traditionally valid in classical logic, and this result naturally
generalises up the hierarchy of *n*-level meta-inferences for
all *n*
(Barrio *et al* 2019).

The connection between the cut-free sequent calculus and three-valued
logics has a long history, pre-dating its incorporation into recent
work on substructural logics. Schütte
(1960)
used a non-deterministic three-valued semantics for the cut-free
classical sequent calculus, and this work was extensively used by
Girard
(1987b,
Chapter 3), in his analysis of the semantics of the cut-free sequent
calculus. The application of three-valued methods to the classical
sequent calculus in the absence of the structural rule of
*identity* was also considered in Hösli and Jäger
(1994).
This work has application to understanding phenomena in logic
programming
(Jäger, 1993).

## Bibliography

A comprehensive bibliography on relevant logic was put together by Robert Wolff and can be found in Anderson, Belnap and Dunn 1992. The bibliography in Restall 2000 is not as comprehensive as Wolff’s, but it does include material up to the end of the 20th Century.

### Books on Substructural Logic and Introductions to the Field

- Anderson, A.R., and Belnap, N.D., 1975,
*Entailment: The Logic of Relevance and Necessity*, Princeton, Princeton University Press, Volume I. - Anderson, A.R., Belnap, N.D. Jr.,
and Dunn, J.M., 1992,
*Entailment*, Volume II, Princeton, Princeton University Press.

[This book and the previous one summarise the work in relevant logic in the Anderson–Belnap tradition. Some chapters in these books have other authors, such as Robert K. Meyer and Alasdair Urquhart.] - Barrio, E. and Égré, P., 2022,
“Substructural Logics and Metainferences”
*Journal of Philosophical Logic*, 51: 1215–1231.

[A summary of recent work on reflexivity and the *Cut* rule and characterising different logics in terms of the metainferences they admit.] - Bimbó, K. and Dunn, J. M., 2008,
*Generalised Galois Logics: Relational Semantics of Nonclassical Logical Calculi*, CSLI Lecture Notes, v. 188, CSLI, Stanford.

[A systematic treatment of the ternary relational semantics and its generalisations for substructural logics.] - Dunn, J. M. and Restall, G., 2000,
“Relevance Logic” in F. Guenthner and D. Gabbay (eds.),
*Handbook of Philosophical Logic*, second edition, Volume 6, Kluwer, pp. 1–136.

[A summary of work in relevant logic in the Anderson–Belnap tradition.] - Galatos, N., P. Jipsen, T.
Kowalski, and H. Ono, 2007,
*Residuated Lattices: An Algebraic Glimpse at Substructural Logics* (Studies in Logic: Volume 151), Amsterdam: Elsevier, 2007. - Mares, Edwin D., 2004,
*Relevant Logic: a philosophical interpretation*, Cambridge University Press.

[An introduction to relevant logic, proposing an information theoretic understanding of the ternary relational semantics.] - Moortgat, Michael, 1988,
*Categorial Investigations: Logical Aspects of the Lambek Calculus*, Foris, Dordrecht.

[An introduction to the Lambek calculus.] - Moot, Richard and Retoré
Christian (2012)
*The Logic of Categorical Grammars*, Springer, Berlin.

[An introduction to the Lambek calculus, including its non-associative variant, multimodal calculi, linear logic and proof nets.] - Morrill, Glyn, 1994,
*Type Logical Grammar: Categorial Logic of Signs*, Kluwer, Dordrecht.

[An introduction to the Lambek calculus.] - Ono, Hiroakira, 2019,
*Proof Theory and Algebra in Logic*, Springer, Singapore.

[Not exclusively on substructural logics, this treatment of the relationship between proof theory and algebra treats logical systems quite generally, and includes substructural logics in its remit.] - Paoli, Francesco, 2002,
*Substructural Logics: A Primer*, Kluwer, Dordrecht.

[A general introduction to substructural logics.] - Read, S., 1988,
*Relevant Logic*, Oxford: Blackwell.

[An introduction to relevant logic motivated by considerations in the theory of meaning. Develops a Lemmon-style proof theory for the relevant logic \(\mathbf{R}\).] - Restall, Greg, 2000,
*An Introduction to Substructural Logics*, Routledge. (online précis)

[A general introduction to the field of substructural logics.] - Routley, R., Meyer, R.K.,
Plumwood, V., and Brady, R., 1983,
*Relevant Logics and their Rivals*, Volume I, Atascardero, CA: Ridgeview.

[Another distinctive account of relevant logic, this time from an Australian philosophical perspective.] - Schroeder-Heister, Peter, and
Došen, Kosta, (eds), 1993,
*Substructural Logics*, Oxford University Press.

[An edited collection of essays on different topics in substructural logics, from different traditions in the field.] - Troelstra, A. S., 1992,
*Lectures on Linear Logic*, CSLI Publications

[A quick, easy-to-read introduction to Girard’s linear logic.] - Zardini, Elia, 2021, “Substructural
Approaches to Paradox,”
*Synthese*, 199: 493–525.

[A summary of recent work on substructural approaches to paradox.]

### Other Works Cited

- Ackermann, Wilhelm, 1956,
“Begründung Einer Strengen Implikation,”
*Journal of Symbolic Logic*, 21: 113–128. - Avron, Arnon, 1988, “The Semantics and Proof
Theory of Linear Logic,”
*Theoretical Computer Science*, 57(2–3): 161–184. - Barrio, Eduardo; Lucas
Rosenblatt, and Diego Tajer, 2014, “The Logics of
Strict-Tolerant Logic,”
*Journal of Philosophical Logic*, 44:5, 551–571. - Barrio, Eduardo; Federico Pailos,
and Damian Szmuc, 2019, “A Hierarchy of Classical and
Paraconsistent Logics,”
*Journal of Philosophical Logic*, 49:1, 93–120. - Bellin, Gianluigi; Martin
Hyland, Edmund Robinson, and Christian Urban, 2006, “Categorical
Proof Theory of Classical Propositional Calculus,”
*Theoretical Computer Science*, 364: 146–165. - Brandom, Robert, 2000
*Articulating Reasons*, Harvard University Press. - Church, Alonzo, 1951, “The Weak Theory of
Implication,” in
*Kontrolliertes Denken: Untersuchungen zum Logikkalkül und zur Logik der Einzelwissenschaften*, A. Menne, A. Wilhelmy and H. Angsil (eds.), Kommissions-Verlag Karl Alber, 22–37. - Cobreros, Pablo; Paul
Égré, David Ripley and Robert van Rooij, 2012,
“Tolerant, Classical, Strict,”
*Journal of Philosophical Logic*, 41: 347–385. - Curry, Haskell B., 1977,
*Foundations of Mathematical Logic*, New York: Dover (originally published in 1963). - Dunn, J.M., 1991, “Gaggle Theory: An
Abstraction of Galois Connections and Residuation with Applications to
Negation and Various Logical Operations,” in
*Logics in AI, Proceedings European Workshop JELIA 1990* (Lecture Notes in Computer Science, Volume 476), Berlin: Springer-Verlag. - –––, 1993, “Star and
Perp,”
*Philosophical Perspectives*, 7: 331–357. - Field, H., 2008,
*Saving Truth from Paradox*, Oxford: Oxford University Press. - Fine, K., 1989, “Incompleteness for
Quantified Relevance Logics,” in J. Norman and R. Sylvan (eds.),
*Directions in Relevant Logic*, Dordrecht: Kluwer, pp. 205–225. - Fjellstad, Andreas, 2015, “How a
Semantics for Tonk Should Be,”
*Review of Symbolic Logic*, 8:488–505. - French, Rohan, 2016. “Structural
Reflexivity and the Paradoxes of Self-Reference,”
*Ergo*, 3 No. 05. doi:10.3998/ergo.12405314.0003.005 - Geach, P. T., 1955, “On Insolubilia,”
*Analysis*, 15: 71–72. - Gentzen, Gerhard, 1935, “Untersuchungen
über das logische Schließen,”
*Mathematische Zeitschrift*, 39: 176–210 and 405–431.

[An English translation is found in Gentzen 1969.] - –––, 1969,
*The Collected Papers of Gerhard Gentzen*, M. E. Szabo (ed.), Amsterdam: North Holland, 1969. - Girard, Jean-Yves, 1987, “Linear
Logic,”
*Theoretical Computer Science*, 50: 1–101. - –––, 1987b,
*Proof Theory and Logical Complexity*, Volume 1, Bibliopolis, Naples. - Goldblatt, R., and E. Mares, 2006,
“An Alternative Semantics for Quantified Relevant Logic,”
*Journal of Symbolic Logic*, 71(1): 163–187. - Hösli, Brigitte and Gerhard Jäger,
1994. “About Some Symmetries of Negation,”
*Journal of Symbolic Logic*, 59(2): 473–485. - Jäger, Gerhard, 1993, “Some
proof-theoretic aspects of logic programming,” p. 113–142
in
*Logic and Algebra of Specification*, F. Bauer, W. Brauer and H. Schwichtenberg (ed.), Springer. - Lambek, Joachim, 1958, “The Mathematics of
Sentence Structure,”
*American Mathematical Monthly*, 65: 154–170. - –––, 1961, “On the
Calculus of Syntactic Types, ” in
*Structure of Language and its Mathematical Aspects*(Proceedings of Symposia in Applied Mathematics, XII), R. Jakobson (ed.), Providence, RI: American Mathematical Society. - Moh Shaw-Kwei, 1950, “The Deduction Theorems
and Two New Logical Systems,”
*Methodos*, 2: 56–75. - Moortgat, Michael, 1995, “Multimodal
Linguistic Inference,”
*Logic Journal of the IGPL*, 3: 371–401. - Ono, Hiroakira, 2003, “Substructural Logics
and Residuated Lattices – an Introduction,” in V.
Hendricks and J. Malinowski (eds.),
*Trends in Logic: 50 Years of Studia Logica*, Dordrecht: Kluwer, 2003, pp. 193–228. - Reiter, Raymond, 1980. “A Logic for Default
Reasoning,”
*Artificial Intelligence*, 13:81–132. - Ripley, David, 2012, “Conservatively
Extending Classical Logic with Transparent Truth,”
*The Review of Symbolic Logic*, 5:354–378. - Rooij, Robert van, 2011, “Vagueness,
tolerance and non-transitive entailment,” in P. Cintula, C.
Fermüller, L. Godo, and P. Hajek (eds.),
*Reasoning Under Vagueness: Logical, Philosophical and Linguistic Perspectives*, College Publications, pp. 205–222. - Routley, R., 1980. “Problems and Solutions
in Semantics in Quantified Relevant Logics,” in A. Arruda, R.
Chuaqui, and N.C.A. Da Costa (eds.),
*Mathematical Logic in Latin America*, Amsterdam: North Holland, 1980, pp. 305–340. - Schütte, Kurt, 1960. “Syntactical and
Semantical Properties of Simple Type Theory,”
*Journal of Symbolic Logic*, 25:305–326. - Tennant, Neil, 2017.
*Core Logic*, Oxford University Press. - Troelstra, A.S., 1992, “Lectures on
Linear Logic”,
*CSLI Lecture Notes* (Number 29), Stanford: CSLI Publications. - Weir, Alan, 2005. “Naive Truth and
Sophisticated Logic” in Jc Beall and B. Armour-Garb (eds.),
*Deflationism and Paradox*, pages 218–249. - Zardini, Elia, 2008. “A Model of
Tolerance,”
*Studia Logica*, 90(3): 337–368.

## Academic Tools

- How to cite this entry.
- Preview the PDF version of this entry at the Friends of the SEP Society.
- Look up topics and thinkers related to this entry at the Internet Philosophy Ontology Project (InPhO).
- Enhanced bibliography for this entry at PhilPapers, with links to its database.

## Other Internet Resources

- Restall, Greg, 1994, *On Logics Without Contraction*, Ph.D. Thesis, The University of Queensland.
- Slaney, John, 1995, MaGIC: Matrix Generator for Implication Connectives, a software package for generating finite models for substructural logics.