Necessity and Mathematical Truth

A clever man got caught in this net of language!
So it must be an interesting net.

— Ludwig Wittgenstein, Remarks on the Foundations of Mathematics, II-15

The nature of mathematical knowledge has long been a subject of debate (indeed, since the birth of Western philosophy). It has always attracted attention for what seem like profound differences between it and scientific, empirical knowledge. One such difference is that of necessity: mathematical truths seem forced upon us somehow, in such a way that we cannot deny their truth. Kant dealt with problems like necessity by arguing for, among other things, a philosophical, metaphysical intuition. The logical positivists of the Vienna school and their counterparts in England partially dismantled Kant’s categorization of knowledge by arguing for a different conception of necessity, one which didn’t require metaphysics. Willard Quine attacked this categorization even further by responding to the positivists’ critique with an even more thoroughly empiricist critique of his own.
Ludwig Wittgenstein’s views on the philosophy of mathematics,1 as well as his views on most other philosophy, were partially intended as a response to the above sequence of argument. But rather than respond within the framework essentially bequeathed by Kant to the rest of philosophy, Wittgenstein sought to disregard that framework. This led to a potent conceptualization of the philosophy of mathematics, one which explains necessity much more convincingly than do the mainstream positivists and their followers, like Quine.
The intent of this essay is to place Wittgenstein in relation to the more traditional approaches to the philosophy of necessary propositions, and to see how his views inform the traditional approaches.

The Traditional Debate

Alfred Ayer asserts2 that the truths of formal logic and mathematics are necessarily true. The best way to defend this assertion, he says, is to examine the cases in which such truths might seem to be confuted. He gives examples where we might think that two times five is not ten, or that a Euclidean triangle has internal angles which do not sum to 180 degrees. Ayer explains his examples by stating
[T]he principles of logic and mathematics are true universally simply because we never allow them to be anything else. And the reason for this is that we cannot abandon them without contradicting ourselves, without sinning against the rules which govern the use of language, and so making our utterances self-stultifying. In other words, the truths of logic and mathematics are analytic propositions or tautologies.

I will return to Ayer’s characterization of mathematical truths as analytic propositions shortly. First, let’s consider the explanation he offers for the necessity of mathematical truths.
It is interesting to note that both of Ayer’s example truths are not purely mathematical truths, but rather mathematical truths applied to the world. In the first case, his suggested confutation of “2 x 5 = 10” involves miscounting some group of (real) objects, which we expect to be different in number due to our translation of “2 x 5 = 10” from a statement about abstract numbers to a statement about numbers of real objects. In the second case, Ayer refers to the measurement of a specific triangle supposed to be Euclidean, which is taken to be in error due to our translation of “All Euclidean triangles’ internal angles sum to 180 degrees” from a statement about abstract triangles to a statement about physical triangles. In both cases, Ayer says that we will always adopt empirical hypotheses to explain our errors in counting or measurement, and that we will never adopt an explanation which says that ten is not always the product of two and five. To do so would be to contradict ourselves, which is verboten.
Ayer has not actually considered cases in which mathematical truths might be confuted. Rather, he’s given empirical applications of mathematical truths, and shown how errors in our empirical predictions based on those “empiricized” statements would not lead us to upset the mathematical truths on which the “empiricized” statements are based. This should not be surprising, for Ayer has done little more than assert that we will not waver in our conviction as to the truth of necessary, non-empirical statements, in light of empirical evidence, because we will not waver in our conviction as to their truth. I am reminded of W.V. Quine’s phrase in “Two Dogmas of Empiricism”: “Our argument is not flatly circular, but something like it. It has the form, figuratively speaking, of a closed curve in space”.
How might mathematical truths be necessary, then? We’re left only with Ayer’s statement that necessary truths are so because we cannot abandon them without falling into self-contradiction. At first glance this sounds good — but as Quine has pointed out, the notion of contradiction is just as much in need of explication as the notion of necessity. So Ayer’s assertion might still be of value, if we can find any good reasons to justify it.
However, I find Quine’s approach no more satisfactory than Ayer’s. While his efforts seem laudable, he fails to adequately explain the certainty we ascribe to mathematical truths. In fact, Quine has the same problem that I’ve ascribed to Ayer above. He explains the uncertain nature of all knowledge (including knowledge usually classified as non-empirical, such as logical and mathematical knowledge) with an analogy to a “man-made fabric, which impinges on experience only along the edges.” Quine, too, makes mathematical truth amendable or revisable in light of empirical evidence. But because he fails to examine closely the possible revision of mathematical truths — they are merely part of the entire “fabric” of our knowledge whose revisability he discusses — we’re left to speculate as to how that revision might occur. I suppose that it would be something like Ayer’s examples, where it’s not actual mathematical statements which are subjected to scrutiny, but “empiricizations” of those statements.
This point bears some expansion. In what is essentially Quine’s only example of the revision of necessary truths, he mentions the possibility, brought to light through work in quantum mechanics, that the law of the excluded middle does not always hold. The rejection of this law has been inconclusively debated, but the fact that debate has occurred indicates, according to Quine, that we are indeed willing, under extreme duress and weight of pragmatic considerations, to revise necessary truths such as those of logic and mathematics. The case of the excluded middle is taken as emblematic of any other possible revision of necessary truth.
There are two readings of the situation in which we would revise the law of the excluded middle. In the first, which is a more philosophically oriented perspective from which logic has a broader domain, the law would be rejected outright. Because the law purports to be universal, true in all situations, and there are situations in which the law does not hold, it is not really a law. In the second, which is a more strictly mathematical perspective from which logic is considered more formally (and more importantly, only in relation to its ability to describe deduction in abstract, formal systems), the law is the same, on the surface. However, from the second perspective the law does not claim to hold “universally,” in the sense of all possible (including physical) worlds. Rather, the law applies to well-formed formulas of the relevant system of logic. Such systems do not, on their own, make statements about quantum particles. Applications of the law of the excluded middle to situations involving measurements of the position of quantum particles are empirical statements, which, following the above discussion of “empiricization,” are subject to the same problems as any other empirical statement, including all those which aren’t derived from non-empirical statements.
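To make the second, formal perspective concrete, the law of the excluded middle can be written as a theorem schema of classical propositional logic. The formalization below is my own illustration; it appears in neither Quine nor Wittgenstein.

```latex
% The law of the excluded middle, as a theorem schema of classical
% propositional logic: for every well-formed formula $\varphi$ of the
% system,
\vdash \varphi \lor \neg\varphi
% Read this way, the schema ranges over formulas of the system, not
% over states of affairs in any (physical) world. A statement such as
% "the particle is at position $x$ or it is not" is an empirical
% interpretation of the schema, and it is that interpretation, not the
% schema itself, that quantum mechanics puts under pressure. (In
% intuitionistic logic, by contrast, the schema is simply not a
% theorem.)
```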
The dichotomy I present here indicates that there may be problems in applying this approach to Quine’s argument against the analytic/synthetic distinction. Quine is quite free to argue that some carefully-guarded truths, such as those of logic (from a philosophical rather than mathematical standpoint), are subject to his hypothetical revision, or even rejection. I suspect such an argument will have problems, but for the purposes of the current essay that is neither here nor there. As I am interested in discussing the necessity of mathematical statements — including under that aegis all theories of formal systems, even those identified with logic in its mathematical guise — it’s the second half of my dichotomy that Quine must respond to. I believe Quine’s only example of the revision of necessary truth to be, like Ayer’s examples, a false one. Like Ayer he examines revision by “empiricizing” (more subtly so, though) non-empirical truths. Thus Quine really has offered no justification for the idea that necessary truths could in principle be revisable.
Despite the flaws in their arguments, both Ayer and Quine hint at what I take to be a better explanation of necessary truth. Recall that Ayer maintains that “we cannot abandon them [mathematical truths] without contradicting ourselves, without sinning against the rules which govern the use of language.” In opposition, Quine holds that “any statement can be held true come what may, if we make drastic enough adjustments elsewhere in the system [of statements of our total knowledge].” Before explaining how I think both of these views could be somewhat correct, though, a little bit of motivation is helpful. For motivation, I examine the motivations of Ayer (and many of the other logical positivists) and Quine in turn.
In their attack on all things metaphysical, the logical positivists sought to revise the traditional categorization of knowledge made most popular by Immanuel Kant. In Kant’s picture of knowledge, there were four qualifiers which, combined in various ways, gave analytic a priori, analytic a posteriori, synthetic a priori, and synthetic a posteriori knowledge. No knowledge was analytic a posteriori. To the synthetic a posteriori belonged the propositions of natural science. The analytic a priori included statements like “all bachelors are unmarried,” due to Kant’s characterization of analytic statements as those in which the predicate belonged to the subject as something which was covertly contained in the concept of the subject. Thus analytic truths (or judgments, in Kant’s terminology, which amounts to the same thing) “add nothing through the predicate to the concept of the subject, but merely break it up into those constituent concepts that have all along been thought in it, although confusedly.” Kant classed metaphysics and mathematics under the synthetic a priori. That is, he thought we gained new knowledge through them, but that we could gain that knowledge independently of experience.
The positivists attempted to eliminate metaphysics through application of the verifiability criterion. Something else had to be done with mathematics, which, despite the fact that it does not purport to speak about material affairs, appears to be very useful. Following what was apparently a misreading of Wittgenstein’s Tractatus logico-philosophicus3 , the logical positivists asserted that mathematical truths were tautologies. Thus mathematics was classed under analytic a priori knowledge, eliminating all synthetic a priori knowledge from Kant’s categorization.
Quine’s attacks on the program of analytic philosophy amounted to more shuffling of Kant’s classification of knowledge. As Quine believed that all knowledge was ultimately revisable, and thus answerable in some way to “the tribunal of experience,” he had no use for the analytic a priori. In fact, he believed there was no definite boundary between those concepts which are analytic, and those which are synthetic. So for Quine, the only knowledge is the synthetic a posteriori knowledge, though to state it that way is to do his view a slight disservice, since “synthetic” loses its meaning when it cannot be meaningfully opposed to “analytic.”
So, in one (loose) sense, much of the work in the analytic tradition since Kant has been to fiddle with his characterization of knowledge. My motivations here are similar, but with a positivist twist. Often a tool in the positivist tradition has been to say, in effect, “it’s not the answers that are wrong — it’s the questions, stupid.” A similar rejoinder may be made to those working in the analytic tradition, including Ayer and Quine, who have accepted Kant’s categorization of knowledge and simply contented themselves with reorganizing it. Look at the moves these philosophers have made: Ayer thought that the synthetic a priori category was senseless, so he either eliminated its members (metaphysics) or moved them to another category (mathematics). Quine thought the distinction between synthetic and analytic was senseless, so in effect, he moved mathematics and “definitional” truths (i.e. “all bachelors are unmarried”) to the synthetic a posteriori category.
In both cases, an alternative response was possible: mathematical truths are not “knowledge” as the laws of physics or the theories of biology are. Rather, following Wittgenstein’s later work in the philosophy of mathematics, mathematical truths are “rules of syntax.” Taking up such a view constitutes a response to the traditional, Kant-derived picture of knowledge which does not simply shuffle the categories. To view mathematical truths as rules of syntax is to leave behind the Kantian categorization entirely.

Rules of Syntax

Unfortunately, Wittgenstein’s exposition of his thought is, famously, obscure. This has led not only to widespread difficulty in understanding his ideas, but also to broad divergence in their interpretation and exegesis. Thus I elect to take a somewhat pragmatic stance for the remainder of this essay, attempting to convey the substance and worth of Wittgenstein’s ideas about rules of syntax, while avoiding the painstaking attention to detail that marks many of the explorations of his work. Such details are important for later study, but only serve to complicate the introduction to what is already an unorthodox approach to the problem of necessity.
Consider the game of Monopoly.4 There are a number of statements one can make about the workings of the game, on a purely formal level. Some of these statements are mostly restatements of the official rules, such as “any player who passes Go receives $200 from the bank, unless directed otherwise by a Chance or Community Chest card, or unless landing on Go To Jail.” Others are consequences of the official rules, such as “barring further effects of moves made from ‘free’ doubles rolls, and effects of Chance or Community Chest cards, no player can return to the space from which he begins his turn, on the same turn.” That is, they are not stated explicitly, but because they are consequences of the explicitly stated rules, they have just as much force as those rules. They are rules in their own right.
The stated rules of Monopoly can be interpreted as the “grammar” of a game of Monopoly. So too can the “derived” rules be interpreted. They give the “syntax” by which actions in the game may be taken. Consider what happened when, as a child, you played Monopoly. It’s likely that you, or someone you played against, cheated (everybody cheats — that’s what makes the game fun). Aside from “house rules” allowing it, cheating is of course against the rules; this is not expressly stated in the rules, but it is understood, by the very fact that rules are given, that the rules given and only the rules given are to govern play. The rules, qua rules, serve a normative function. So what happens when an act of cheating is caught (that being the only case in which the legality of cheating comes into prominence)? A protest to the effect of “that’s not allowed!” Barring misunderstandings of the rules (which are rare in a game as simple as Monopoly, and a different sort of dispute from the one that matters here), there is no serious quibbling over the legality of illegal acts. If one player moves five spaces upon a roll of only four on the dice, and the other player catches the first in this illegal game action, discussion like this is never heard:
“You moved five spaces instead of four.”
“It was okay just this time.”
Similarly, if a player draws a Chance card from the middle of the deck rather than the top, we do not hear:
“You can’t draw a card from the middle.”
“I’m allowed to when I play with the doggie.”
Explanations for rulebreaking of this sort are not given simply because the rules do not allow it. In this sense the rules do not serve simply as a guide for what might be appropriate actions, or suggest norms to adopt. They determine what counts as a move, and what does not. They banish any uncertainty5 from play of the game.
Now consider an analogy between Monopoly, and the practice of mathematics. Because Monopoly is a formal game, this analogy can be made precise, but this is not necessary for our purposes.
In mathematics, there are both stated and unstated rules. Formalizations of mathematics such as Russell and Whitehead’s Principia Mathematica, or Hilbert’s foundations program, offer examples of stated rules at a very basic level. Other stated rules include the axioms of group theory, real manifolds, and cardinal arithmetic.6 The unstated rules are of course more difficult to state, but they include all the practices of mathematicians which are not formalized.
In mathematics there are also derived rules. These are, of course, the theorems and applications of theorems. Examples of theorems include the Pythagorean theorem, representation theorems for various kinds of algebras, and the upward Löwenheim-Skolem theorem. Examples of applications of theorems include statements like “in Euclidean geometry, the hypotenuse of a right triangle with legs of lengths 3 and 4 is of length 5” and “there is an element of order 3 in a group of order 27.”7 As in Monopoly, these theorems are consequences of the rules given “at the beginning of the game.”
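The relation between a stated theorem and an “application” can be displayed for the two examples just given; the worked instances below are my own.

```latex
% Pythagorean theorem (the general, stated rule): in a Euclidean
% right triangle with legs $a$, $b$ and hypotenuse $c$,
a^2 + b^2 = c^2
% Application (a derived rule for a particular case): with legs 3
% and 4,
3^2 + 4^2 = 9 + 16 = 25 = 5^2, \qquad \text{so } c = 5.
% Likewise, Cauchy's theorem says that if a prime $p$ divides the
% order of a finite group $G$, then $G$ contains an element of order
% $p$. Application: $3 \mid 27$, so a group of order 27 contains an
% element of order 3.
```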
Again as in Monopoly, the statements of mathematics serve a normative purpose. Think back to elementary school, learning arithmetic, or even to high school or college, learning the rules of differentiation. Mistakes are met with explanations like:
“You didn’t use the right rule here. To take this derivative you must use the quotient rule.”
“This is wrong — you forgot to carry a one in this column.”
These examples are illustrative, but their didactic component should not be confused with their normative one. Though a didactic tone is often assumed when pointing out errors — that is, enforcing norms — it is inconsequential to the real matter at hand. The normative character of mathematical statements is brought out more at the extremes of normativity. Suppose a student fails to understand the distributive law. The student’s teacher can try to provide different explanations, or give examples. After a point no further explanation can be given, though — if the student still does not understand, there is nothing to be done. The student simply cannot do the mathematics, for no progress can be made without following the rules. If a piece of mathematics is not done according to the rules of mathematical syntax, it is rubbish — nonsense.
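For concreteness, the distributive law the hypothetical student fails to grasp can be stated explicitly; the statement and the numerical instance below are my own illustration.

```latex
% The distributive law of multiplication over addition:
a(b + c) = ab + ac
% A numerical instance of the rule:
4(10 + 2) = 4 \cdot 10 + 4 \cdot 2 = 40 + 8 = 48
```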
Given this hint at Wittgenstein’s approach to mathematics, some questions are immediate. What happened to truth?8 Or necessity? This view appears to involve some sort of conventionalism; what is its relation to more standard conventionalist accounts, and their critics? To put some of these issues in perspective I return to Ayer and Quine’s characterizations of necessity.
Ayer’s stance that we cannot abandon mathematical truths without contradicting ourselves sounds somewhat like the normative role that rules of syntax play. This is part of what I meant earlier when I said that both Ayer and Quine hinted at a better explanation of necessity. Ayer felt his way around the idea that somehow, mathematical truths were forced upon us. In this, he glanced at the notion “that mathematical truths are ‘certain’ in the uniquely normative sense that all possibility of doubt has been grammatically excluded: it simply makes no sense to doubt the truth of mathematical propositions.”9 But because Ayer attempted to explain this in terms of self-contradiction (a notion, as Quine pointed out, hopelessly tied up with those of analyticity and necessity), he lapsed into offering vaguely metaphysical answers, as in his explanation for why we are surprised by mathematics, despite its supposed tautological nature.10
On the other hand, Quine’s radical empiricism is reminiscent of the conventionalist notions implicit in Wittgenstein’s account of rules of syntax. Quine would have all knowledge be revisable, including the most certain parts of knowledge at the center of his metaphorical web. That is, all statements are subject to change in light of the appropriate evidence. For statements far from the periphery, this may mean something more like the outright rejection of one principle in favor of another, rather than the tweaking of a scientific hypothesis. But Quine barely discusses how such revision might occur, and like Ayer resorts to vague explanations of the circumstances under which we might revise a highly central truth. Likewise, statements at the center are highly guarded against change from the outside, and we will go to great lengths to preserve their truth. Pragmatic concerns reign supreme.
Wittgenstein’s rules of syntax offer a more satisfying account of the necessity and certainty of Quine’s centrally located truths of logic and mathematics, just as they do for Ayer above. But they also offer revision, of a sort.
The game example above was hopefully suggestive — a heavy-handed attempt to parallel Wittgenstein’s characterization of language games. A lengthy passage11 is well worth quoting for its description of language games, as well as its implications for the question of revision.
I shall in the future again and again draw your attention to what I shall call language games. These are ways of using signs simpler than those in which we use the signs of our highly complicated everyday language. Language games are the forms of language with which a child begins to make use of words. The study of language games is the study of primitive forms of language or primitive languages. If we want to study the problems of truth and falsehood, of the agreement and disagreement of propositions with reality, of the nature of assertion, assumption, and question, we shall with great advantage look at primitive forms of language in which these forms of thinking appear without the confusing background of highly complicated processes of thought. When we look at such simple forms of language the mental mist which seems to enshroud our ordinary use of language disappears. We see activities, reactions, which are clear-cut and transparent. On the other hand we recognize in these simple processes forms of languages not separated by a break from our more complicated ones. We see that we can build up the complicated forms from the primitive ones by gradually adding new forms.

Wittgenstein saw the whole of language as made up of a patchwork of various language games. He had a similar view of mathematics (it being a part of language as a whole), seen as a language or practice of mathematicians. This multiplicity of language games comes into play in some of his discussion of rule-breaking. To adopt a new rule in place of an old one is not, as Quine would have it, a rejection of the truth of the latter. It is to take part in a new language game, taking as rules a different set of statements. This phenomenon is best exhibited in mathematics by the development of non-Euclidean geometries. When it was discovered that adopting the negation of the parallel postulate led to internally consistent, but non-Euclidean, geometries, two new language games sprang up to complement Euclidean geometry: Riemannian and Lobachevskian geometry. That is to say: different forms of language for different uses12 — different games with different rules. Anyone who has played Monopoly with “free parking money”13 will recognize that, once you change the rules of a game, though the games may be similar14 , you are really playing a new game.
The adoption of these different language games is another instance of conventionalism, in whatever sense that word might apply to Wittgenstein. As it is the focus of much debate over Wittgenstein, conventionalism deserves a bit of attention before I close. Only a brief sketch, though, to indicate directions for future study — the conventionalism debate as it applies to Wittgenstein especially is marred by the misunderstanding and broad divergence of opinion I earlier attributed to Wittgenstein commentators.
Ayer moved toward “modified” conventionalism in the preface/postscript to the 1946 edition of Language, Truth and Logic. There he claimed he had made a mistake earlier in thinking that a priori propositions were themselves linguistic rules. He held that “they can properly be held to be true, which linguistic rules cannot, they are distinguished also by being necessary, whereas linguistic rules are arbitrary. At the same time, if they are necessary, it is only because the relevant linguistic rules are presupposed.” He went on to essentially claim that though there were conventions underlying language, inferences and deductions made in it are still necessary. Ayer’s view is roughly similar to that held by many of the positivists.
The most influential modern response to the logical positivists’ form of conventionalism is given by Dummett15, who makes a distinction between “modified” and “full-blooded” conventionalism. According to Dummett, the modified conventionalist holds that all necessary propositions can be divided into two classes: statements of conventions, and further conventions derived from the stated conventions. Full-blooded conventionalism holds that only stated conventions are in fact conventions. Dummett argues that modified conventionalism (which to him is the only reasonable position to adopt for a non-stipulative justification of mathematical truth) must either collapse into full-blooded conventionalism, or give up the claim that all necessary truths are conventions. Dummett’s attack focuses on the modified conventionalist’s problem of identifying what exactly it is that makes derived conventions conventions themselves.
On Dummett’s reading, Wittgenstein was attempting, through his ideas about rules of syntax, to supplant the positivists’ modified conventionalism16 with a full-blooded version. Wittgenstein was supposedly responsible for the paradigmatic instance of full-blooded conventionalism, through his insistence that (quoting Dummett) “the logical necessity of any statement is always the direct expression of a linguistic convention. That a given statement is necessary consists always in our having expressly decided to treat that very statement as unassailable.” But, as Shanker says, “This is a distinctly odd way of stating the matter: whether a statement has been expressly designated or tacitly treated as a convention is quite irrelevant to the question of whether we should describe it as a convention on the basis of its use.” Wittgenstein himself was careful in his use of the word “convention”; in the Ambrose Lectures, he warned that it is “misleading” to call “‘2 + 2 = 4’ the expression of a convention” because “the situation with respect to it is comparable to the situation supposed in the Social Contract theory.” That is, the convention is understood, or implicit: we don’t come out and say “alright, we’re going to treat ‘2 + 2 = 4’ as a rule from now on.” This does not mean, however, that such conventions do not have the force of explicitly stated conventions. Recall our Monopoly example above. There were both stated rules and derived rules. Though the derived rules are rarely stated explicitly, aside from perhaps cases of misunderstanding, it’s understood that they too guide the play of the game. The derived rules hold the same force as the stated rules because of their source, in the stated rules.
This is not the most important problem with Dummett’s interpretation of Wittgenstein, though. As I indicated earlier, Wittgenstein’s approach is not meant to respond to the traditional one within its own framework. His approach leaves that framework behind completely. Shanker responds17 to Dummett:
The crux of the Logical Positivists’ position was that a proposition is “necessary” because of the semantic conventions determining the meanings of words and the syntactic rules that have been laid down for their combination. It simply makes no sense to try to extend this framework to cover Wittgenstein’s conception of mathematical propositions as rules, and it was precisely for that reason that Wittgenstein had moved in this direction in the first place. By encouraging us to treat mathematical propositions as rules rather than tautologies, Wittgenstein was trying to bring us to see that the compositional theory of meaning has no bearing whatsoever on the question of the certainty of mathematical truth. This is entirely a matter of the normativity of mathematical propositions: of the manner in which we use mathematical propositions as standards of correct representation.

There are more points at which Dummett and Wittgenstein and his followers are at odds, but these are the main ones. For the most part, I think that Shanker’s critique of Dummett points us in the right direction: constantly away from the traditional picture of meaning that Wittgenstein tried so hard to avoid. Other responses to Wittgenstein are possible — for example, Saul Kripke’s critique of Wittgenstein’s ideas on rule following is relevant because mathematics, as a form of language game, falls under language games in general, and thus under Wittgenstein’s arguments about what it means to follow a rule — but best left for future study. Hopefully this paper has indicated the ways in which Wittgenstein’s approach to necessity avoids the traditional pitfalls, while also making that approach somewhat accessible to readers inclined to accept the traditional picture of knowledge.

— Josh Kortbein, Spring 2000

1 Though they are not quoted often in this essay (others’ reconstructions of Wittgenstein’s views tending to be much more accessible for quoting purposes), I have referred freely in my work at understanding Wittgenstein to a handful of his relevant works, including Remarks on the Foundations of Mathematics, Philosophical Remarks, The Blue and Brown Books, and Philosophical Investigations. William Brenner’s Wittgenstein’s Philosophical Investigations was also helpful earlier on in understanding Wittgenstein’s relationship to other philosophers and philosophical positions, though I did not refer to it directly in the writing of this essay. Finally, S.G. Shanker’s Wittgenstein and the Turning-Point in the Philosophy of Mathematics was invaluable in placing Wittgenstein in historical perspective, and my debt to it should be clear to anyone who reads it.

2 In Language, Truth and Logic, chapter IV.

3 S.G. Shanker gives an account (p. 274 of Wittgenstein and the Turning-Point in the Philosophy of Mathematics) as follows:
One of the more subtle points of contact between the Tractatus reflections on mathematics and the themes developed in Philosophical Remarks lies in the repudiation of the thesis that mathematical propositions are tautologies. But whereas this became a central concern in the work of the 1930s, it was only indirectly expressed in the Tractatus. At 6.2-6.21 Wittgenstein maintained that mathematical propositions are pseudo-propositions (Scheinsätze) which ‘do not express a thought’. Earlier (4.1272) he had explained that such pseudo-propositions are nonsensical (unsinnig). But at 4.461-4.4611 he explicitly stated that tautologies are not nonsensical, and nowhere did he suggest that they are pseudo-propositions. Rather, they are well-formed but senseless propositions (sinnlose Sätze). It follows, then, that mathematical propositions cannot be tautologies: something which Wittgenstein did not spell out as such for the simple reason, perhaps, that no one had ever suggested otherwise.

4 … and grant me a little latitude — I can’t find my copy. And “Monopoly” is no doubt a trademark of the Parker Brothers corporation.

5 Save for the rolls of the dice — so no comments from the peanut gallery.

6 The use of the word “axiom” here is telling of the typical nature of the stated rules of mathematics.

7 I call these “applications of theorems” to indicate the status they often hold in mathematics: they are not general enough to be discussed separately, since they are instantiations of more general theorems, often involving universal or existential quantifiers. However, results like these are often invoked in relation to specific mathematical situations. “Application” is not meant to refer to any application to empirical science.

8 As this issue is somewhat ancillary, it is best tied up in a note. Shanker notes (p. 283) Wittgenstein’s problem:
[F]or it now remained to reconcile the claim that mathematical propositions are rules of syntax with the time-honoured conviction that they are true and that these truths are certain (LFM 55). The difficulty here is simply that we do not ordinarily regard rules as ‘true’; certainly not in the sense that they ‘express a thought’ or ‘describe a fact’. But then, in what sense are they ‘true’ or ‘certain’? The closest Wittgenstein came to discussing this problem was… ‘We cannot say of a grammatical rule that it conforms to or contradicts a fact. The rules of grammar are independent of the facts we describe in our language… The words “practical” and “impractical” characterise rules. A rule is not true or false’ (AWL 65, 70). But where does that leave mathematical truth? Later in the same course of lectures we find Wittgenstein content to refer somewhat elusively to the truth of mathematical propositions as deriving from the fact that they deal with concepts as opposed to meaningless marks (AWL 146 ff.). In subsequent discussions the issue was even further downplayed…
Thus any kind of “truth” we attribute to statements of mathematics is really more of a “correctness” than anything related to empirical truth, or to a traditional theory of analytic statements.

9 Shanker, p. 285.

10 Here (LT&L, pp. 85-6) Ayer talks of beings of “infinite intellect” who would immediately see all consequences of mathematical axioms, and thus find nothing surprising about any possible deductions.

11 From Wittgenstein’s The Blue Book, p. 17 (dictated to his students at Cambridge in the thirties, and thus unusual in Wittgenstein’s corpus for its clarity).

12 Wittgenstein mentions a similar point, arguing that Einstein showed the geometry of space to be in effect a grammar for our hypotheses about it: which geometry was chosen was mostly irrelevant, save for pragmatic considerations.

13 In which all payments made “to the community” that would usually go back into the bank are instead placed in the center and given to any player who lands on the Free Parking space.

14 “How do we compare games? By describing them — by describing one as a variation of another — by describing them and emphasizing their differences and analogies.” (Remarks on the Foundations of Mathematics, II-49)

15 In “Wittgenstein’s Philosophy of Mathematics”.

16 Among other things, whether or not the positivists indeed held a modified conventionalist view is something under debate. The typical logical positivist position on conventions is that they were in place prior to the derivations and inferences which lead to necessary truths. The necessary statements themselves were not taken to be conventions; see the Ayer quote above this point in the main body.

17 Shanker, p. 291.