Notes to Natural Deduction Systems in Logic

1. It might be noted, however, that Gödel and Bernays both presented natural deduction systems in lectures, at Notre Dame and Princeton respectively, in the late 1930s. Details in Došen and Adžić (2018) for Gödel and in von Plato (2008) for Bernays. A detailed historical account of the textbooks can be found in Pelletier (1999).

2. This phrase seems to have been used (with the appropriate mock-religious fervor) first by Routley and Meyer (1973: 199):

None of the model structures that Kripke made, nor that Hintikka made,…were, however, relevant. This caused great sadness in the city of Pittsburgh where dwelt the captains of American Industry. The Logic Industry was there represented by Anderson, Belnap & Sons, discoverers of entailment and scourge of material impliers, strict impliers, and of all that to which their falsehoods and contradictions led. Yea, every year or so Anderson & Belnap turned out a new logic, and they did call it E, or R,…, and they beheld each such logic, and they were called relevant. …Word that Anderson & Belnap had made a logic without semantics leaked out. Some thought it was wondrous and rejoiced, that the One True Logic should make its appearance among us in the Form of Pure Syntax, unencumbered by all that set-theoretical garbage.

Of course, this was not Natural Deduction being held as a logic to worship, but Relevance Logic.

3. We will go into more detail on needed changes to the rules in §5. For a more leisurely presentation of a natural deduction system for classical logic, see Shapiro and Kouri Kissel 2021.

4. Gentzen (1934) gave natural deduction formalizations of both intuitionistic and classical logics. In this introductory section we mention only his formalization of classical logic. He also discussed sequent calculi for both intuitionistic and classical logics, as we discuss below in §4 and §5.

5. In the present proof, Gentzen has inferred \(\bot\) from \(\neg q\) and \(q\), and then used \(\neg\)-I to get the doubly negated \(r\). At this stage he employed double-negation elimination (\(\neg\neg\)-E), one of Gentzen’s alternative ways to pass from intuitionistic logic to classical logic, to infer \(r\), and then applied \(\rightarrow\)-I. Gentzen’s main method of passing to classical logic was to allow instances of the Law of Excluded Middle (\(p\lor\neg p\)) to occur as “initial” formulas in trees (so: as logical axioms). We discuss this issue further in §4 and §5. See particularly note 23 below.
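Schematically (a linearized sketch of the pattern just described, not Gentzen’s actual figure), the classical step runs:

\[ \begin{align*} &[\neg r] && \text{assumption, to be discharged}\\ &\;\;\vdots &&\\ &\neg q \qquad q &&\\ &\bot && \neg\text{-E}\\ &\neg\neg r && \neg\text{-I, discharging } [\neg r]\\ &r && \neg\neg\text{-E} \end{align*} \]

after which \(\rightarrow\)-I applies in the usual way.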

The particular presentations of Gentzen-style proofs in this document are due to Sam Buss’s LaTeX package, bussproofs.

6. The main method that Jaśkowski’s article describes resembles that of Quine (1950a), but we shall not further consider it, concentrating only on the method of the footnote. Details about the relationship of these two methods can be found in Pelletier (1999).

7. The actual rules (and their names) employed by Jaśkowski are different from the more modern ones we cite in this example. But the overall effect is about the same.

8. The particular presentations using Fitch’s method in this document are due to Johan Klüwer’s LaTeX package, fitch.

9. The liberal-minded may generalize the rule to allow an \(n\)-adic conjunction, for arbitrary finite \(n\), to be inferred from its \(n\) conjuncts. More conservative logicians will use repeated applications of the dyadic rule for this purpose.
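For instance, the conservative obtains the effect of a triadic introduction by two applications of the dyadic rule (reading the triadic conjunction as the right-nested \(A\land(B\land C)\); the left-nested reading would do as well):

\[ \dfrac{A \qquad \dfrac{B \qquad C}{B\land C}\;\land\text{-I}}{A\land(B\land C)}\;\land\text{-I} \]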

10. The pedantic may think of this as two rules, depending on whether the left or right conjunct is the conclusion. Liberals will of course generalize the rule to allow \(n\)-adic conjunctions as premisses.

11. Similar variations for pedants, liberals, and conservatives.

12. It is easily seen that this rule can be restricted to yield only atomic formulas as conclusions: the effect of using it to establish a logically complex \(B\) as conclusion can be obtained by using various introduction rules after (possibly multiple) applications of the restricted rule. In our discussion of normalization, §5 below, we will tacitly assume this restriction.
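Taking the rule in question as “from \(\bot\) infer any \(B\)”, the point can be illustrated with a conjunction of atomic formulas: instead of concluding \(p\land q\) directly, infer each conjunct from \(\bot\) by the restricted rule and then apply \(\land\)-I:

\[ \dfrac{\dfrac{\bot}{p} \qquad \dfrac{\bot}{q}}{p\land q}\;\land\text{-I} \]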

13. The conclusion of this rule is obtained by adding a negation operator to the hypothesis of the subproof. Many authors dub the corresponding rule in which the conclusion subtracts a negation “negation elimination”. (Both rules sometimes go by the name reductio ad absurdum.) In classical logic, both rules are valid, but only the negation-adding rule holds in, e.g., intuitionistic logic. The negation-subtracting rule is very powerful: it can be taken as the only negation rule in classical logic, others being derivable from it. It does not fit naturally into the pattern of Int-Elim rules: in particular, the basic normalization result (see §5) depends on pairing \(\neg\)-I with the \(\neg\)-E rule as stated above. For these reasons we would urge that the negation subtracting rule be given a distinctive name. We like Indirect Proof.
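As a minimal illustration of that power: together with \(\neg\)-E as stated above, a single application of Indirect Proof already delivers double-negation elimination, the assumption \([\neg r]\) being discharged at the final step:

\[ \dfrac{\dfrac{\neg\neg r \qquad [\neg r]^{1}}{\bot}\;\neg\text{-E}}{r}\;\text{IP},1 \]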

14. Introduction and elimination rules are usually abbreviated by writing the connective followed by -I (or -Int or -Intro) for the former and -E (or -Elim) for the latter. We mostly use -I and -E.

15. This simple statement is sufficient if \(t\) is a constant or parameter, or built up from such by means of function-symbols. If the bound variables of the language are drawn from the same alphabet as the parameters, so that \(t\) may contain (or be) a bindable variable, some restriction is needed to prevent unintended “capture” of variables: \(t\) must not contain a variable that is bound by a quantifier of \(A(x)\) within whose scope an occurrence of \(x\) lies (that is, \(t\) must be “free for” \(x\) in \(A(x)\)). The same restriction is needed in \(\exists\)-I. (This annoyance is not unique to systems of natural deduction: it affects, e.g., axiomatic systems as well.)
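A standard illustration of the danger: let \(A(x)\) be \(\exists y\,(y\neq x)\), so that \(\forall xA(x)\) holds in any domain with at least two elements. Taking \(t\) to be the variable \(y\) violates the restriction, and the purported instance is unsatisfiable:

\[ \dfrac{\forall x\,\exists y\,(y\neq x)}{\exists y\,(y\neq y)}\quad(\text{illicit }\forall\text{-E with }t=y) \]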

16. Again, variable capture is to be avoided if the formation rules of the language make it a possibility. So, add the restriction that the variable \(x\) bound by the initial quantifier of \(\forall xA(x)\) must not also be bound by a quantifier within \(A(a)\) whose scope includes an occurrence of \(a\). The same restriction is needed for \(\exists\)-Elimination (and, of course, in analogous rules of non-natural deduction systems).
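For instance, \(\exists y\,(y\neq a)\) is obtainable, for an arbitrary parameter \(a\), from the satisfiable \(\forall x\,\exists y\,(y\neq x)\); choosing \(y\) as the variable of generalization would let the inner \(\exists y\) capture the occurrence of \(a\), yielding an unsatisfiable conclusion:

\[ \dfrac{\exists y\,(y\neq a)}{\forall y\,\exists y\,(y\neq y)}\quad(\text{illicit }\forall\text{-I, capturing }a) \]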

17. In our opinion this pretense makes it a lot easier to understand what quantifiers and quantifier rules are doing, and it is often adopted by logicians wanting to get on with the job without bogging down in technicalities. Philosophers may worry (one of us used to) about the idea of names for indiscernible elements in some domains, or about names for all the elements in nondenumerable domains. Suffice it to say that (using about the same set-theoretic resources as are needed for the standard, Tarskian, account of quantifiers) the idea of an extension of a given language containing constants for all the elements of an arbitrary domain can be made completely rigorous.

18. Gentzen had used \(\supset\) as his symbol for the object language conditional, and therefore had the arrow symbol \(\longrightarrow\) available to represent the relationship between the lists of symbols. Since we employ \(\rightarrow\) as our object language conditional, we use \(\vdash\) as our way to represent this relationship, which is the most common convention in the current literature.

19. Counting rules like \(\land\)-Elim [\((\phi\land\psi)\vdash\phi\) and \((\phi\land\psi)\vdash\psi\)] as just one rule.

20. In contrast, the presentation in Gentzen (1936) explicitly wrote out each of the formulas that a line in his proof depended on.

21. This method was used in Lemmon (1965) and Mates (1965), which were themselves very influential in popularizing it.

22. Most rules involving suppositions are standard in this variation, but the \(\lor\)-E rule is rather different:

\[ \begin{align*} \text{From } & \vdash \Gamma, (\phi\lor\psi), \Delta\\ \text{Infer } & \vdash \Gamma, \phi, \psi, \Delta\\ \end{align*} \]
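For instance, taking \(\Gamma\) to be the single formula \((p\rightarrow r)\), \(\Delta\) to be empty, and the disjunction to be \((p\lor q)\), the rule licenses the step

\[ \begin{align*} \text{From } & \vdash (p\rightarrow r), (p\lor q)\\ \text{Infer } & \vdash (p\rightarrow r), p, q\\ \end{align*} \]

which simply replaces the disjunction, in place, by its two disjuncts while carrying the rest of the list along.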

Cellucci (1992) describes this system and its metatheory: the system turns out to be surprisingly efficient, often giving quite compact proofs, despite the repetition of formulas in multiple sequences.

23. For example, any of the following would do to effect the extension to classical logic, as would various other possibilities. (The first is a simplification of Gentzen’s original rule, which had \((\phi\lor\neg\phi)\) formulas (\(\phi\) an atomic formula) as leaves of trees.)

[Five Gentzen-style proof figures, labeled (a) through (e), display these alternatives.]

[An extended description of the figures in note 23 is in the supplement.]

24. Fitch formulations for all these revised TONK rules are:

[Three Fitch-style proof figures, labeled (a) through (c), display these formulations.]

And the alternative formulation, in which the TONK-I rule is altered instead, is:

[Three Fitch-style proof figures, labeled (d) through (f), display this alternative formulation.]

[An extended description of the figures in note 24 is in the supplement.]

25. Although there have been various instances of this view of language throughout the history of philosophy and linguistics, it has usually been attributed most strongly to Wittgenstein’s Philosophical Investigations and the many like-minded works of the mid-twentieth century.

Copyright © 2021 by
Francis Jeffry Pelletier <francisp@ualberta.ca>
Allen Hazen <aphazen@ualberta.ca>
