Philosophy of Mathematics

The philosophy of mathematics plays an important role in analytic philosophy, both as a subject of inquiry in its own right and as an important landmark in the broader philosophical landscape. Mathematical knowledge has long been regarded as a paradigm of human knowledge, with truths that are both necessary and certain, so giving an account of mathematical knowledge is an important part of epistemology. Mathematical objects like numbers and sets are archetypical examples of abstracta, since we treat such objects in our discourse as though they are independent of time and space; finding a place for such objects in a broader framework of thought is a central task of ontology, or metaphysics. The rigor and precision of mathematical language depends on the fact that it is based on a limited vocabulary and a highly structured grammar, and semantic accounts of mathematical discourse often serve as a starting point for the philosophy of language. Although mathematical thought has exhibited a strong degree of stability through history, the practice has also evolved over time, and some developments have evoked controversy and debate; clarifying the basic goals of the practice and the methods that are appropriate to it is therefore an important foundational and methodological task, locating the philosophy of mathematics within the broader philosophy of science.

In this chapter, I will try to convey a modern philosophical understanding of the subject as it is practiced today. Our contemporary understanding has been shaped by traditional questions and concerns about the nature of mathematics, and Section 2 provides a broad overview of the general array of philosophical positions in place by the turn of the twentieth century. On the other hand, the nineteenth century is generally taken to represent the birth of ‘modern’ mathematical thought, and many new issues arose with the dramatic conceptual shifts that took place. Sections 3 and 4 summarize some of these developments. Our contemporary philosophical understanding has also been informed by nineteenth and twentieth century developments in mathematical logic, which can be viewed as a reflective, mathematical study of the methods of mathematical reasoning itself. Section 5 tries to explain what we have learned from such a study, in philosophical terms. Finally, Sections 6 and 7 try to convey, in broad strokes, some of the central lines of thought in the philosophy of mathematics today.

Traditionally, the two central questions for the philosophy of mathematics are: What are mathematical objects? How do we (or can we) have knowledge of them? Plato offers the following simple answers: abstract mathematical objects, like triangles and spheres, are forms, which have imperfect reflections in this world. Before we are born, our souls have direct interactions with these forms, though we forget most of what we know during the traumatic circumstances of our birth. Recapturing this knowledge is thus a process of recollection, which can be encouraged by the dialectical process. This position is illustrated by Plato’s portrayal of Socrates in Meno, when Socrates calls over a slave boy and, through a sequence of questions, brings the boy to understand a simple geometric theorem.

Socrates leads Meno to the conclusion that since Socrates did not tell the boy the theorem, the boy must have had knowledge of it all along. Although we today may have difficulty with the theory’s reliance on otherworldly forms and our soul’s prenatal activities, the account does have its advantages: it does justice to the mysterious abstract nature of mathematical objects, and explains why we do not have to appeal to our physical experiences to justify mathematical statements.

In contrast, the Aristotelian account of mathematical knowledge holds that mathematical objects, like triangles and spheres, are abstractions from our experiences. That is, from our interactions with various roughly spherical objects, we form the concept of a perfect sphere. Reasoning about spheres in general boils down to reasoning about specific spheres we have encountered, qua their sphericity; that is, we deliberately ignore features like size, weight, and material in the discourse. It is this disciplined behaviour that ensures that our conclusions are appropriately general, and even though the spheres we encounter in our experience are not perfect spheres, our conclusions apply to them insofar as they approximate perfect spheres.

Thus the divide between Plato and Aristotle is an early example of the tension between philosophical theories that give primacy to abstract concepts and those that give primacy to experience. This tension underlies the common distinction between rationalists and empiricists among the early modern philosophers, the former taking mathematics and ‘innate ideas’ as the paradigm of knowledge, and the latter basing their accounts of knowledge on the empirical sciences. At times, it can be hard to tell exactly what is at stake. For example, John Locke, as distinctly empiricist as they come, allows for innate faculties like comparing, compounding, and abstracting, and reflecting on the inner workings of our mind; and he agrees that mathematical knowledge consists of certain knowledge of ideas, although these ideas must ultimately spring from experience.

René Descartes, the prototypical rationalist, maintains that mathematics is the paradigm of knowledge, since its truths can be obtained by a clear and unclouded mind reflecting on clear and distinct ideas. At the end of the Meditations, he allows that our physical models must have something to do with the world, since God is not a malicious deceiver. But what we have certain knowledge of are the mathematical concepts and their relationships to one another, in contrast to the merely approximate knowledge that the world conforms, more or less, to our models.

The distinctions between mathematical and scientific knowledge are expressed in different ways by the early modern philosophers, and such reformulations provide different insights. For example, Gottfried Leibniz distinguished between necessary and contingent truths: the former, including the truths of mathematics, are true in all possible worlds, and could not be otherwise; the latter, like the facts of scientific discovery, could have been different.

Similarly, David Hume distinguished between relations of ideas and matters of fact. The truths of mathematics are grouped under relations of ideas, ‘every affirmation of which is either intuitively or demonstratively certain.’ In contrast, any matter of fact could have been otherwise: ‘the contrary of every matter of fact is still possible, because it can never imply a contradiction and is conceived by the mind with the same facility and distinctness, as if ever so conformable to reality.’

The work of Immanuel Kant went a long way to clarify and defuse some of the differences between rationalist and empiricist stances. For example, Kant helpfully underscored the difference between asserting that a concept arises from experience, and asserting that it arises with experience. According to Kant, the issue is not whether we are born with a concept of triangle or whether we develop this concept over time. Rather, the relevant fact is that an appropriate justification for assertions about triangles need not make reference to experience. Truths that have this character, like the truths of mathematics, Kant called a priori. The remaining truths, that is, those we justify by referring to our experiences, he called a posteriori.

Kant went on to observe that one can separately distinguish between judgements that depend only on the definition of the concepts involved, and those that don’t. In fact, Kant only considered assertions in subject-predicate form; he called a judgement ‘A is B’ analytic when ‘the predicate B belongs to the subject A as something that is (covertly) contained in this concept A.’ For example, the statement that triangles have three sides relies only on the knowledge of the definition of triangle. This notion of analyticity requires clarification, and we will return to it in Section 4.  A judgement that is not analytic he called synthetic. 

According to Kant, any statement that is analytic is a priori; if the truth of a statement rests only on the definition of the concepts involved, then one need not appeal to experience to justify it. Conversely, any statement that is a posteriori has to be synthetic. But is there a middle ground, consisting of statements that are a priori but synthetic? Kant argued that nontrivial truths of mathematics fall exactly into this category. For example, the justification for the fact that 5 + 7 = 12 cannot be found in the definition of the concept of 5, or that of 7, or that of 12, or that of ‘+’, or that of ‘=’. But, on the other hand, we don’t appeal to experimentation to justify the assertion.

Thus, this statement is a priori but synthetic. Given that there are a priori synthetic truths, the central question for Kant is to explain how this is possible. Posing the question in this way forms the basis for his transcendental philosophy, which is a form of ‘reverse engineering’: given that something works, we want to figure out the mechanisms by which it functions. In particular, by reflecting on our mathematical knowledge, Kant aimed to uncover the basic cognitive faculties that made such knowledge possible. Later followers of Kant faced the challenge of clarifying the line between the analytic and the synthetic, and determining what, concretely, could be classified as a priori synthetic.

A common feature of all the views just described is that they take mathematics to deal with abstract objects, whether one takes these to have an independent existence in their own right, or to be abstracted from our experience. An alternative is simply to deny such objects ontological status in the first place, and think of mathematics, instead, as a science governing the use of (relatively concrete) signs. The challenge then is to give an account of mathematical knowledge that explains what it is that gives certain manipulations of signs normative force, and also explains the applicability of mathematics to the sciences. Positions that adopt such an approach fall under the rubric of nominalism, of which Bishop Berkeley’s writings provide an early example.

Though the sketch I have just given is rough, it does provide a sense of the lay of the land. One can take mathematical objects to be independent and abstract, but then one is challenged to account for our knowledge of them; one can take them to be abstractions from experience, but then one has to account for (or deny) the apparent certainty of mathematical truths; or one can take mathematics to be a set of essentially linguistic conventions, in which case one needs to explain why their use is worthy of the term ‘knowledge.’ This vexing trichotomy continues to plague contemporary thought about mathematics.

Thus the philosophy of mathematics lies at a treacherous frontier, where theorizing about the nature of thought, language, and the world must come together. Almost everyone agrees, however, that whatever the nature of mathematical knowledge, mathematical proofs are central to its acquisition. This, at least, poses one useful constraint: any reasonable theory of mathematical knowledge has to square with that fact.

 

Nineteenth century developments in mathematics 

Historians of mathematics usually take the nineteenth century to mark the birth of the ‘modern’ style of mathematical thought that is practiced today. Since much of the philosophy of mathematics in the twentieth century was focused on coming to terms with some of the dramatic changes that occurred in the previous century, it will be helpful to survey some of these developments.

One important trend was a gradual increase in abstraction. In particular, modern algebraic concepts began to emerge, and served to unify methods from a number of branches of mathematics. For example, a ‘group’ is a system of objects with an associative binary operation, an identity element, and inverses. By the middle of the nineteenth century, mathematicians had noticed that a number of important arguments in number theory, geometry, and the theory of equations could be understood as making use of general properties of groups, instances of which were found in these particular domains. So, just as numbers and triangles could be viewed as abstractions from experience, groups could be viewed as abstractions from various number systems and geometric configurations that arose in mathematical practice.
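
In symbolic form (the notation here is mine, not the text’s): a group is a set G together with a binary operation ·, an identity element e, and, for each element a, an inverse, satisfying

\[
(a \cdot b) \cdot c = a \cdot (b \cdot c), \qquad e \cdot a = a \cdot e = a, \qquad a \cdot a^{-1} = a^{-1} \cdot a = e
\]

for all a, b, c in G.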

Algebraic reasoning of this sort relies, in part, on a style of reasoning that has come to be called the axiomatic method: one characterizes systems, or ‘structures,’ of interest by their defining properties. Reasoning solely on the basis of these axioms grants the conclusions that one draws full generality, in the sense that any resulting theorems will then hold of any particular system that satisfies the axioms.

Developments in geometry illustrate such an axiomatic point of view. By the middle of the century, it had become clear that one could consistently study systems of points and lines satisfying axioms different from Euclid’s. So, instead of viewing geometry as the ‘true’ science of space, one could take geometry to be the study of properties of various geometries, that is, systems of points and lines satisfying various sets of axioms. This view of geometry as nothing more than the study of systems satisfying certain axioms is illustrated by David Hilbert’s landmark Foundations of Geometry, which begins as follows:

Definition. Consider three distinct sets of objects. Let the objects of the first set be called points and be denoted by A, B, C, …; let the objects of the second set be called lines and be denoted by a, b, c, …; let the objects of the third set be called planes and be denoted by α, β, γ, …. The points, lines, and planes are considered to have certain mutual relations and these relations are denoted by words like ‘lie,’ ‘between,’ ‘congruent.’ The precise and mathematically complete description of these relations follows from the axioms of geometry.

This passage illustrates a modern structuralist point of view, in which the objects of mathematical study are taken to be nothing more than elements of structures satisfying various axiomatic requirements. To support the general study of ‘systems’ of objects, mathematicians began to develop a language and methods to reason about such systems. Thus, Bernhard Riemann’s theory of complex functions, which made use of what we now call Riemann surfaces, led him to the general notion of a manifold of points; Richard Dedekind based his development of algebraic number theory on properties of systems of objects, and mappings between them; and Georg Cantor’s work in analysis led him to develop a general theory of infinite sets.

These incipient uses of ‘set-theoretic’ methods came at a cost. At the beginning of the nineteenth century, mathematics was viewed as a science of calculation and construction. For example, existence theorems in Euclid assert that it is possible to construct various geometric objects; in algebra and analysis, one sought algorithms for computing solutions to equations. Now the growing emphasis on abstract characterization of mathematical structures began to draw mathematical practice away from algorithmic concerns. Indeed, thinking about mathematical objects in terms of abstract systems gave rise to the possibility of proving existence statements in the absence of explicit calculations, and such ‘conceptual’ methods were often preferred to explicitly computational ones.

One finds, for example, instances of proofs that deliberately suppress algorithmic information in Dedekind’s work on algebraic number theory. David Hilbert put such methods to even more striking use in the late 1880s with his famous Basissatz, which provided nonconstructive solutions to a broad family of questions of central importance to algebra. What is so striking about Hilbert’s work is that he made use of mathematical assertions that are computationally false; for example, the fact that in any sequence of polynomials there is an element with minimal degree, even though we may not be able to determine which element it is from an explicit description of the sequence.

More general notions of a function or mapping arrived hand-in-hand with the more general notion of a set (or system) of objects. For the eighteenth century mathematician Leonhard Euler, a ‘function’ was implicitly assumed to have a certain type of representation (that is, piecewise as a convergent sum of elementary functions). Riemann’s work in complex analysis showed that there are other ways of characterizing functions, say, by their algebraic, geometric, or topological properties. Thus a function came to be viewed as something more abstract, independent of the various means of description.

The focus on ‘arbitrary’ sets and mappings (rather than on their means of representation or description) led to a newfound boldness in dealing with infinitary mathematical objects. It is important to note that these new developments were not uniformly welcomed, and there were mathematicians who felt that mathematical practice was in danger of wandering astray from its proper concerns.  These attitudes fed into the ‘foundational crisis’ of the early twentieth century, and even though such foundational debates have largely subsided, tensions between ‘abstract’ and ‘concrete’ views of mathematics remain with us to this day.

 

Nineteenth century developments in the foundations of mathematics 

Work in the foundations of mathematics aims to identify and clarify the subject’s fundamental concepts and methods. This task invariably involves presuppositions as to the nature of mathematics, and so foundational work provides a bridge between philosophical reflection and mathematical practice. The nineteenth century developments we have just discussed, including the cross-fertilization of ideas from different branches of mathematics and the extensions to mathematical terminology and methods, led naturally to reflective and foundational concerns.

Certainly the mathematical developments had bearing on philosophical views. For example, the introduction of infinitary mathematical objects and structures seems to challenge empiricist attempts to account for knowledge in terms of abstractions from experience, since it is not clear how we can have direct experience of the infinite. For another example, recall the Kantian program of accounting for a priori synthetic mathematical knowledge in terms of the nature of cognition. In particular, Kant famously (or notoriously) took Euclidean geometry to be a necessary component of our pure intuition of space. But the nineteenth century brought the gradual realization that one can consistently study a range of alternative geometries, and the modern view that mathematicians are free to study any of these systems, leaving the question as to which of these is most appropriate to modeling the physical world to the sciences. Thus the nineteenth century shift towards abstraction seemed to speak against the claim that certain forms of cognition are necessary to mathematical thought.

While raising some new concerns, work in foundations served to allay others. For instance, from the origins of the calculus in the seventeenth century, mathematicians had been concerned with its foundations, with their references to infinitesimal quantities and limiting processes. Berkeley’s eighteenth century critique of the calculus, The Analyst, laid bare the contradiction inherent in treating infinitesimals as nonzero quantities that are nonetheless smaller than any positive magnitude: ‘May we not call them the ghosts of departed quantities?’ (Berkeley’s goal was not to demean the calculus, but, rather, to buttress the relative rationality of belief in God. In the work, Berkeley mocked apostate scientists who ‘strain at a gnat and swallow a camel.’)

In the nineteenth century, however, work by Bernard Bolzano, Augustin Cauchy, Riemann, Karl Weierstrass, and others showed how one could interpret notions of limit, integral, and derivative in terms of ordinary quantificational statements about the real numbers. This is sometimes referred to today as the rigorization of analysis, that is, the reduction of discourse involving infinitesimals to talk of ordinary real numbers.

But what, exactly, is a real number? Towards the end of the nineteenth century, further work (by Weierstrass, Cantor, Dedekind, and others) showed how one could make sense of real numbers in terms of sequences or sets of rational numbers, and it was well-known that the rationals could be understood in terms of pairs of natural numbers. But what about the natural numbers? One option is to take these as fundamental, as suggested by Kronecker’s oft-quoted remark, ‘God created the whole numbers; everything else is the work of man.’

Even here, however, some foundational thinkers sought further reduction. For example, Richard Dedekind showed how one could characterize the natural numbers uniquely up to isomorphism (that is, state exactly the properties of a system of objects that make it fit to stand duty as a system of natural numbers); and using his methods for reasoning about systems and mappings, he was able to establish the existence of at least one such system, from the assumption that there exists any infinite set at all.

Dedekind’s analysis amounts to a reduction of the natural numbers to an incipient theory of sets and functions. Gottlob Frege aimed for a similar reduction of the natural numbers to an appropriate system of logic. Recall that Kant had classified mathematical truths as being a priori synthetic. Challenging this, Frege argued that, in fact, the truths of arithmetic should be considered analytic, where now the term ‘analytic’ was enlarged to include truths that are obtained from definitions of the concepts involved by purely logical reasoning. The claim that substantial portions (or all) of mathematics can be reduced to logic has come to be called logicism. 

There are two components to any logicist program: identify a system of logic, and show how it can support the relevant mathematics. To be sure, there was no clear and universally accepted definition of ‘logic’ then, nor is there now. But starting with his Begriffsschrift of 1879, Frege began to describe systems of reasoning that, he argued, represent the necessary laws of thought.

His achievement was impressive, and his systems include a fairly modern form of higher-order relational and quantificational reasoning. Frege then began to develop the theory of the natural numbers on such a basis.

 

Twentieth century developments in mathematical logic 

Where do we stand? We have considered some of the questions regarding the nature of mathematical objects and our knowledge of them that have, traditionally, been of central concern to the philosophy of mathematics. We have also considered nineteenth century developments in mathematics that accentuated these issues or cast them in a new light. We have at least hinted at an interplay between philosophical views and methodological issues that had deep and lasting effects on the practice of the subject itself. Finally, we have touched on early foundational and logical developments, representing important attempts to come to terms with these issues.

The study of logic and the foundations of mathematics enjoyed explosive growth after the turn of the twentieth century, and most philosophical theorizing about mathematics since then has been strongly influenced by these developments. The results of this inquiry provide informative clarifications and analyses of notions like proof, truth, and computation, and modern philosophical discussions often rely on these, at least implicitly. Thus, it will be helpful to consider this current logical understanding, before (finally) describing some twentieth century philosophical views in the next section.

To begin with, developments in the early twentieth century led to the recognition that one can distinguish between two informal notions of ‘logical consequence,’ based on a corresponding distinction between syntactic and semantic aspects of mathematical language. Roughly, the syntax of a language consists of the formal grammatical rules that govern the construction of terms and assertions; a semantics is an account of what these syntactic elements mean. The fact that ‘Between any two elements, there is another element’ is a grammatically correct assertion is a syntactic fact; the fact that this assertion is true when interpreted as a statement about real numbers or points on a line, but false when interpreted as a statement about whole numbers, is a semantic one.

Now we can identify a syntactic notion of logical consequence: an assertion is a deductive consequence of some assumptions if it can be established by a sequence of steps, each representing a permitted inference according to the rules governing the appropriate use of logical connectives, that is, terms like ‘and,’ ‘if … then,’ ‘every,’ and ‘some.’ The early twentieth century saw the identification and analysis of such logical frameworks, in the form of formal axiomatic bases for logical reasoning.

There is also a semantic notion of logical consequence: a statement is a semantic consequence of some hypotheses if, no matter how the non-logical terms are interpreted, whenever the hypotheses are true under the interpretation, so is the conclusion. For example, the conclusion ‘Every X is a Z’ is a logical consequence of the assumptions ‘Every X is a Y’ and ‘Every Y is a Z,’ since no matter how we interpret X, Y, and Z, if the hypotheses are true, so is the conclusion. Developing a mathematical theory of semantic consequence required not just an analysis of mathematical language, but also a notion of ‘true under an interpretation.’ Alfred Tarski’s theory of ‘truth in a model’ provides us with just that.

For first-order logic, Kurt Gödel’s completeness theorem of 1929 shows that these two notions coincide: a statement is a deductive consequence of some hypotheses just in case it is a semantic consequence of these hypotheses. In contrast, second-order logic extends first-order logic with variables ranging over predicates and relations, and it is a consequence of Gödel’s incompleteness theorem that there is no sound, effective deductive system for second-order logic that is complete for the standard semantics, in which variables are taken to range over all predicates on the first-order domain. (Here, the term ‘effective’ denotes the reasonable requirement that the axioms and rules be expressed in such a way that there is an algorithmic procedure to determine whether a given text constitutes a valid proof.) Some take this to be an argument against considering second-order (and higher-order) logic to be properly called ‘logic.’ (See the discussion of this in Section 6.)
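
Writing Γ ⊢ φ for ‘φ is a deductive consequence of the hypotheses Γ’ and Γ ⊨ φ for ‘φ is a semantic consequence of Γ’ (notation not used in the text above), the completeness theorem for first-order logic can be stated compactly:

\[
\Gamma \vdash \varphi \quad \Longleftrightarrow \quad \Gamma \models \varphi.
\]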

With a notion of deductive consequence in hand, one can take a mathematical theory to be the set of deductive logical consequences of a set of axioms describing a particular mathematical domain. The question naturally arises as to where to draw the line between logic and mathematics. For example, recall that Frege aimed to show that the truths of arithmetic could be reduced to logical truths, with the term ‘logic’ suitably construed. In 1902, however, Bertrand Russell observed that Frege’s logical system was inconsistent.

Roughly speaking, Russell’s paradox is that if one is allowed to form the set S of all sets that are not members of themselves, then S is a member of itself if and only if it isn’t, yielding a contradiction. (More precisely, Frege’s logical framework allowed one to consider the concept of being the extension of a concept which does not hold of its own extension, but the net effect is the same.) Henri Poincaré, Russell, and others diagnosed the problem as lying in the impredicativity of the definitions; for example, Russell’s definition of the set S involves a variable ranging over the collection of all sets, of which S itself is a member. Russell responded to the problem by introducing a ‘ramified’ theory of types, which bars this type of circularity by stratifying the language so that a definition can only quantify over concepts whose definitions are logically prior. Such a system was to form the basis of Russell and Alfred North Whitehead’s Principia Mathematica, in which portions of mathematics were developed on that basis. The system’s ramification, however, made it impossible to handle a number of ordinary mathematical developments, leading Russell to add an additional ‘axiom of reducibility.’ This addition drew criticism, since it is hard to justify its status as a ‘logical’ axiom. Within a few years, Leon Chwistek and Frank Plumpton Ramsey had argued that, on a logicist conception, one could justifiably dispense with Russell’s ramification.
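
In modern set-builder notation (rather than Frege’s own formalism), the paradox runs as follows:

\[
S = \{\, x \mid x \notin x \,\} \quad \Longrightarrow \quad S \in S \leftrightarrow S \notin S.
\]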

Indeed, formal axiomatic frameworks for ‘simple type theory,’ developed by Rudolf Carnap, Kurt Gödel, and Alonzo Church, provide a more workable framework for mathematics while still avoiding the obvious paradoxes. Another axiomatic framework, Zermelo-Fraenkel set theory, was given its modern formulation as a theory based on first-order logic, and was shown to provide a remarkably robust foundation for mathematics.

In 1931, Gödel proved his famous incompleteness theorems. The first incompleteness theorem states that no effective deductive system for mathematics strong enough to prove some basic facts about the natural numbers can be complete; in other words, assuming the system is consistent, there will be statements that are neither provable nor refutable in the system. The second incompleteness theorem shows that no such system can prove its own consistency; a fortiori, it is not possible to demonstrate the consistency of a formal system using any weaker fragment. The second incompleteness theorem dealt a serious blow to Hilbert’s program, which will be discussed in the next section.
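
Stated schematically (the notation here is a standard gloss, not part of the text above), for any consistent, effectively axiomatized theory T that proves enough basic arithmetic, there is a sentence G_T such that

\[
T \nvdash G_T \quad \text{and} \quad T \nvdash \neg G_T,
\]

and, writing Con(T) for a sentence expressing the consistency of T,

\[
T \nvdash \mathrm{Con}(T).
\]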

The 1930s gave rise to the modern theory of computability. We have already discussed the nineteenth century shift from algorithmic to set-theoretic reasoning, and the expanded notion of a function that came with it. With the new methods that had been introduced, set-theoretic language and methodology could be brought to bear on the characterization of functions, without a direct computational interpretation. A precise definition of what it means for a function, say, from natural numbers to natural numbers to be computable was required, however, before one could give explicit examples of functions that are not computable. This was provided by models of computability given by Alan Turing, Church, Jacques Herbrand, Gödel, and Emil Post. Even though the definitions they provided were, on the surface, quite different, it was soon shown that the definitions agree as to which functions are computable. This, combined with a conceptual analysis of the notion of computability given by Turing, has given weight to the Church-Turing thesis, namely, that these definitions in fact capture the informal notion of ‘computability.’ With this analysis, Turing was able to show that there are specific classes of mathematical problems that do not possess algorithmic solutions. The halting problem—that is, the question as to whether a given algorithm ultimately comes to a final state when presented with a given input—is one such class of problems.
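
The diagonal argument behind Turing’s result can be conveyed by a short sketch in Python. The decider halts below is hypothetical (Turing’s theorem is precisely that no such function can be written); it is stubbed out here only so that the sketch is self-contained.

    def halts(program_source, program_input):
        # Hypothetical decider for the halting problem; no such function exists.
        raise NotImplementedError("uncomputable")

    def diagonal(program_source):
        # Do the opposite of whatever the supposed decider predicts.
        if halts(program_source, program_source):
            while True:   # predicted to halt, so loop forever
                pass
        else:
            return        # predicted to loop, so halt immediately

    # Running diagonal on its own source text would have to halt exactly when it
    # does not halt, so no correct implementation of halts can exist.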

In 1963, Paul Cohen, building on work by Gödel, showed that the ‘axiom of choice’ and Cantor’s ‘continuum hypothesis’ are independent of the Zermelo-Fraenkel axioms of set theory. In other words, these two fundamental assertions about sets cannot be derived or refuted from the conception of set given by the Zermelo-Fraenkel axioms. These provide two striking and important instances of the first incompleteness theorem.

Thus early twentieth century research helped clarify mathematical language, the rules of mathematical inference, fundamental mathematical assumptions, and the notion of computability; as well as the limits of formal provability, computability, and definability.

 

Early twentieth century philosophical views 

The advances in mathematical logic just described developed in tandem with inquiry into the nature of mathematics, and we are finally in a position to consider some of the philosophical stances that emerged. Recall that Russell and Whitehead, with the Principia, aimed to revive Frege’s logicist project. The common view today is that logicism has failed, since the various foundations that have been proposed for mathematics seem to rely on axioms that do not have a purely ‘logical’ character, like the assertion that there is an infinite set. W. V. O. Quine was further critical of viewing higher-order reasoning itself as properly ‘logical’ reasoning, arguing that quantifying over predicates treats them as objects, so that higher-order logic is really ‘set theory in sheep’s clothing.’

Thus a general picture emerges in which one views mathematics as consisting of the logical consequences of appropriate mathematical axioms. This has the effect of distinguishing the foundations of mathematics from the foundations of logic; the philosophy of mathematics can then focus on the status of mathematical objects and axioms, consigning the task of accounting for logic and its normative status to a separate office.

In the 1910s, the mathematician L. E. J. Brouwer spearheaded a new movement in mathematics known as intuitionism. This movement had both philosophical and methodological components. On the philosophical side, Brouwer gave a somewhat solipsistic account of mathematical knowledge in terms of intuitive constructions; roughly, one’s assertion that a mathematical statement is true is tantamount to the assertion that one has effected a mental construction that allows one to recognize that this is the case. Basic properties of the natural numbers, say, were to be rooted in our intuitions of time. Logical connectives, however, were also given interpretations in terms of intuitive constructions; for example, an assertion that ‘A or B’ is true is understood to mean that either one has a construction that enables one to see that A is true, or one has a construction that enables one to see that B is true (and one knows which is the case). Asserting the truth of an implication ‘A implies B’ amounts to asserting that one is in possession of a construction that transforms sufficient evidence that A is true into sufficient evidence that B is true. These views have strong implications for the practice of mathematics; for example, on this view, the assertion ‘A or not A’ is not justified until one knows which is the case. Thus Brouwer rejected tertium non datur, or the law of the excluded middle, as a generally valid principle of reasoning. This placed serious restrictions on the type of mathematics that one could practice, and, in particular, ruled out the kind of nonconstructive arguments discussed in Section 3. On the other hand, as noted by many logicians over the next two decades, the Brouwerian principles of reasoning could be given a direct computational interpretation. As a result, intuitionistic philosophy was closely allied with a ‘constructive,’ or algorithmic, mathematical orientation.

In light of Russell’s paradoxes and similar concerns, some felt a philosophical retrenchment, like Brouwer’s, was called for. But although he was sensitive to questions of consistency, Hilbert felt that rejecting modern set-theoretic methods was tantamount to throwing out the baby with the bathwater, and he strongly resisted any restrictions on these newfound mathematical freedoms. In 1922, he launched his program of Beweistheorie, or Proof Theory, which was to ‘settle the question of foundations once and for all.’ Hilbert’s program involved (1) representing modern mathematical reasoning using formal axiomatic systems, and then (2) proving that these systems are consistent (that is, will never yield a contradiction) using only incontrovertible, ‘finitary’ methods. This would guarantee, in particular, that every concrete (and, in fact, universal) assertion proved using the new methods is in fact true; thus from one point of view one could interpret references to infinite sets and structures as ‘ideal’ instruments to facilitate the derivation of finitary, concrete results.

The term ‘formalism’ is usually associated with the claim that the consistency of a formal system alone is sufficient to justify the use of the associated mathematical methods. It is easy to criticize such a view, as Brouwer did: taking the viewpoint to the extreme, mathematics becomes nothing more than a game of manipulating symbols, with nothing to distinguish any one consistent symbol game from another. As a criticism of Hilbert’s program, this is partially unfair. Hilbert’s program did not commit him to the strong claim that mathematical assertions can have no meaning beyond the strict formalist reading above; rather, only to the claim that having a finitary guarantee of consistency provides a certain degree of justification. But, of course, we can reasonably expect the philosophy of mathematics to tell us why certain mathematical practices are better than others, and so it is clear that formalism does not tell the whole story.

By the late 1920s, the divisive and often bitter debates over the justification and methodology of mathematics led to what has been called the ‘crisis of foundations,’ with formalism, intuitionism, and (to a lesser extent) logicism taken to be the central positions. None of these seemed capable of shouldering the burden on its own. Logic in and of itself did not seem sufficient to account for mathematical practice. Appealing to intuition as the final arbiter of mathematical knowledge made it hard to account for the objectivity of mathematical knowledge, and most mathematicians found Brouwer’s intuitionistic practice too constraining. Finally, formalism, while offering some insight into the notion of objectivity and rigor, failed to explain how the conventions of mathematics obtain their normative force. Once again, the problem boils down to that of giving a unified account of language, thought, and knowledge of objective mathematical facts.

One might hope to make progress by fitting an account of mathematics into a broader theory of scientific practice. The logical empiricist (or logical positivist) movement, for example, aimed to divide scientific knowledge into analytic and synthetic components. Roughly, the analytic component was to consist of those truths whose justification rests on commonly accepted conventions of scientific practice. This notion of analyticity was borrowed from Ludwig Wittgenstein, who used it to characterize logical tautologies as truths that simply reflect the proper use of language.

Logical empiricists extended the notion, however, to include mathematics, as well as scientific definitions and conventions. Viewing mathematics as a product of linguistic convention renders it, in a sense, epistemologically empty. Logical empiricists therefore took the nontrivial part of scientific knowledge to be contained in the synthetic component, which consists of assertions whose justification requires some form of appeal to empirical observation. In his famous attack, ‘Two dogmas of empiricism,’ Quine rejected the possibility of drawing a sharp distinction between the analytic and synthetic components.

Instead, he offered a form of empiricism in which the status of any particular claim has to be judged in the context of the entire theory, a position known as holism. On this view, mathematics loses much of its privileged status as a priori, necessary knowledge. Fundamental mathematical assertions are statements whose truth we assent to and may be very reluctant to give up, but, like any other aspect of our scientific theorizing, they may ultimately be revised to accommodate experience and theoretical developments in the future.

Quine also rejected any attempts to justify science on ‘first principles,’ that is, prior metaphysical preconceptions. Instead, he saw the philosopher as working within the framework of contemporary scientific knowledge, engaged in a task of methodological hygiene. That is, the philosopher’s task is to survey contemporary science and tidy up the language and conceptual underpinnings. Quine saw this view as a refinement of a naturalist philosophy found in nineteenth century writings of J. S. Mill. Quine’s landmark work, Word and Object, opens with a quote from a more recent essay by Otto Neurath, in which the philosopher’s task is compared to that of a shipbuilder forced to repair a ship at sea, gradually replacing and reshaping old beams rather than starting afresh.

As far as mathematics is concerned, on Quine’s view, one should grant ontological status to those mathematical objects that are needed to make sense of the best scientific theories we have today. As Hilary Putnam colourfully put it, it would be strange to accept the law of universal gravitation, which asserts that the force exerted by one body upon another is proportional to the ratio of the product of their masses to the square of the distance between them, without believing in the existence of ‘ratios.’
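
In the familiar symbolic form (the notation is mine, not Putnam’s), with F the force, m_1 and m_2 the masses, d the distance between the bodies, and G the gravitational constant:

\[
F = G \, \frac{m_1 m_2}{d^2}.
\]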

Thus mathematical ontological claims are justified by, and only by, their indispensability to the sciences, modulo some allowances for abstractions that serve to round out the theory and support more general empirical values of economy and simplicity.

 

The philosophy of mathematics today 

To the present day, there have been ongoing attempts to adapt and strengthen early twentieth century attempts to ground mathematical knowledge. For example, the ‘neo-Fregean’ program, developed by Crispin Wright and Bob Hale out of a proposal by George Boolos, is a modern form of logicism. First, neo-Fregeans replace the axiom of infinity by a ‘number of’ operator and a formal axiom, ‘Hume’s principle.’ The latter asserts that the number of F’s is equal to the number of G’s just in case the F’s can be put in one-to-one correspondence with the G’s. Next, they show that against the backdrop of second-order logic, Hume’s principle suffices to derive the axiom of infinity, and hence a substantial portion of ordinary mathematics. Appealing to linguistic considerations, they argue that Hume’s principle (as well as second-order logic) can be considered analytic. The analyticity of the relevant portion of mathematics then follows.
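
Written symbolically (with #F abbreviating ‘the number of F’s’ and F ≈ G abbreviating ‘the F’s and the G’s stand in one-to-one correspondence’), Hume’s principle is:

\[
\#F = \#G \;\longleftrightarrow\; F \approx G.
\]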

Others have pursued different metaphysical strategies. Philosophers like Hartry Field have presented nominalistic, or ‘irrealist,’ accounts of mathematical objects, which aim to explain away references to abstract objects via various reinterpretations of mathematical language. Following a suggestion by Putnam, Geoffrey Hellman has instead tried to account for a portion of mathematical reasoning in terms of an ontology of possible worlds.

A number of philosophers have used logical analyses to help clarify ontological and epistemological stances. For example, William Tait has tried to characterize the notion of finitism implicit in Hilbert’s work; Solomon Feferman has clarified the reach of a predicative mathematical ontology, which does not presuppose the totality of all subsets of an infinite set; Wilfried Sieg has clarified the assumptions needed to support the Church-Turing analysis of computability; and Michael Detlefsen has explored the philosophical presuppositions behind Hilbert’s program.

The latter half of the twentieth century brought alternative attempts to ground mathematical knowledge in historical terms. Philosophers like Imre Lakatos and Philip Kitcher argued that the appropriate justification for mathematical axioms and methods of proof is that they are the result of a rational historical process of mathematical invention and discovery. This shifts the philosophical burden to the task of developing a theory of rationality. Many are uncomfortable with such an approach, since, for one thing, it relativises mathematical knowledge to particular historical contexts. Furthermore, it seems to make mathematical knowledge contingent on haphazard historical developments, since it is conceivable that other ‘rational’ processes could have brought us to accept different axioms and methods. At issue is not so much the question as to whether this is in fact the case, but, rather, whether the types of philosophical explanation we seek should be cast in terms of such historical contingencies. Some have gone further to suggest that a proper account of mathematical knowledge should take even more into consideration, such as biological, social, political, and institutional factors.

In recent years there have also been efforts to expand, modify, or clarify the Quinean naturalistic framework. Recall that on the Quinean view, the acceptance of basic mathematical assumptions is justified, holistically, by their role in the physical sciences; on that view, parts of mathematics that are not currently required by these sciences are as yet unjustified. Penelope Maddy has recently proposed a variant of this view which she calls mathematical naturalism. Once one accepts that mathematics, as a whole, is useful to the sciences, she argues, one should evaluate mathematical developments by the internal standards of the community. For example, one may appeal to internal measures of simplicity and generality that may not always line up exactly with broader scientific values, but have proved to be useful for the development of mathematical practice. Thus, in a sense, mathematicians can enjoy a collective bargaining agreement with respect to the broader scientific community.

Attention to the actual practice of mathematics has raised additional philosophical issues. In a foundational essay first published in 1888, Dedekind observed that the question of what natural numbers like 2 and 17 are is largely unimportant to mathematics. What is important is that one is dealing with a structure equipped with a starting element, 0, and an injective ‘successor’ function, and that the resulting structure satisfies the principle of induction. Furthermore, Dedekind showed that any two structures meeting these criteria are isomorphic, so that references to one can be translated into references to the other without any further effect on the theory. This simple observation was revived by Paul Benacerraf in an essay ‘What numbers could not be’: Benacerraf tells a parable of two children who seem to have the same understanding of the natural numbers, but are shocked to find, one day, that their set-theoretic definitions of the natural numbers turn out to be different.
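
One common modern rendering of Dedekind’s conditions (the notation is not his own) describes a structure (N, 0, S) in which 0 belongs to N, the successor function S is injective, 0 is not in the range of S, and induction holds:

\[
\forall X \subseteq N \; \bigl( 0 \in X \wedge \forall n\,(n \in X \rightarrow S(n) \in X) \rightarrow X = N \bigr).
\]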

The point, again, is that the particular choice of definition is irrelevant; what is important is only the structural properties. Such an emphasis on structures has played a central role in twentieth century mathematics, supported by early algebraic work of Hilbert, Emmy Noether, and others, and the associated body of tools, viewpoints, and methods is usually gathered under the banner of ‘structuralism.’

From a foundational point of view, this suggests that one should aim for a characterization of mathematical practice that explains the independence just described. Category theory provides just such an account, analyzing mathematical language in terms of talk of structures and mappings between them, without concern for the nature of the particular elements of those structures. Philosophers like Steve Awodey and Colin McLarty have tried to spell out this philosophical understanding of mathematics. Stewart Shapiro, Michael Resnik, and Charles Parsons have, instead, explored the possibility of using structuralist ideas to fashion a metaphysics for mathematics, in which basic mathematical objects are understood as nothing more than ‘places’ in structures. There have been ongoing efforts to dissolve the knotty problems that come up when one tries to fill in the details.

Some of the most interesting work in recent years has been the result of a retreat from the ‘big’ questions of ontology and metaphysics, in favour of analyses of more particular, local features of mathematical practice. Some have tried to make sense of the way we carry out diagrammatic reasoning, which is not well characterized by deductive formalisms. Standard formalizations of geometry do not seem to explain how it is we understand an argument that makes reference to diagrams, or why it is that such diagrams can confer a better understanding than a purely textual proof. Marcus Giaquinto has therefore tried to find a place for visualization in the epistemology of mathematics. Other inquiries have tried to explain features of mathematics that become salient when one considers the subject’s history. For example, a recent collection of essays tries to make sense of the historical classification of mathematical arguments as ‘analytic’ or ‘synthetic’, terms that one finds already in the fourth century writings of Pappus.

Others have tried to make sense of various value judgements that are found in informal mathematical discourse. We have discussed nineteenth century emphases on ‘conceptual’ over algorithmic reasoning, and the word ‘conceptual’ is often used today as a term of praise. Philosophers like Ken Manders and Jamie Tappenden have begun to try to understand these judgements. Aristotle distinguished between scientific demonstrations that show that something is true and those that explain why something is true, and a good deal of work in the philosophy of science aims to provide accounts of scientific explanation. Philosophers like Mark Steiner and Paolo Mancosu have begun to develop theories of mathematical explanation along similar lines. Branches of mathematics that are designed for scientific applications tend to have features that are distinct from their ‘purer’ cousins; philosophers like Mark Wilson have tried to understand some of these features.

Such approaches share a number of common features. First, they explore issues that seem to have foundational, epistemological, or methodological interest, but extend beyond the narrow confines of a theory of truth and justification.  Second, they support the view that one must pay attention to both modern and historical mathematical practice to get a sense of the issues involved, even if one’s ultimate goal is a general theory that is independent of historical terms. Finally, they share a ‘bottom up’ approach to the philosophy of mathematics, which focuses on specific case studies and more restricted questions, in the hopes that over time a more global and unified theory will emerge. Such approaches do not represent so much a retreat from the traditional questions as the belief that such questions can best be answered in the context of a more robust theory of mathematical understanding.

Suggestions for further reading 

This survey is not comprehensive, and a number of important topics have been omitted. These references provide a starting point for further inquiry. The following provide helpful overviews of contemporary philosophy of mathematics:

Benacerraf, Paul, and Hilary Putnam, editors (1983), Philosophy of Mathematics: selected readings, second edition. Cambridge: Cambridge University Press.

Hart, W. D., editor (1997), The Philosophy of Mathematics. Oxford: Oxford University Press.

Jacquette, Dale, editor (2002), Philosophy of Mathematics: An Anthology. Malden, MA: Blackwell.

Schirn, Matthias, editor (2003), The Philosophy of Mathematics Today. Oxford: Oxford University Press.

Shapiro, Stewart (2000), Thinking About Mathematics. Oxford: Oxford University Press.

Shapiro, Stewart, editor (2005), The Oxford Handbook of Philosophy of Mathematics and Logic. Oxford: Oxford University Press.

The following books and collections provide an overview of the development of logic and the foundations of mathematics:

Beaney, Michael, editor (1997), The Frege Reader.  Malden, MA: Blackwell.

Ewald, William, editor (1996), From Kant to Hilbert: a source book in the foundations of mathematics. Oxford: Oxford University Press.

Giaquinto, Marcus (2002), The Search for Certainty: a philosophical account of foundations of mathematics. Oxford: Oxford University Press.

Haaparanta, Leila, editor, The History of Modern Logic. Oxford: Oxford University Press, to appear.

Mancosu, Paolo (1998), From Brouwer to Hilbert: the debate on the foundations of mathematics in the 1920s. Oxford: Oxford University Press.

van Heijenoort, Jean (1967), From Frege to Gödel: a source book in mathematical logic, 1879-1931. Cambridge: Harvard University Press.

For traditional views on the philosophy of mathematics, the following source book includes relevant works by Plato, Aristotle, Descartes, Leibniz, Locke, Berkeley, Hume, and Kant:

Cahn, Steven M. (1999), Classics of Western Philosophy. Fifth edition. Indianapolis: Hackett Publishing Company.

For some early twentieth century positions on the philosophy of mathematics, in addition to the collections above, see:

Russell, Bertrand (1993/1919), Introduction to Mathematical Philosophy. Mineola, NY: Dover Publications.

Ramsey, Frank Plumpton (1931), The Foundations of Mathematics and Other Logical Essays, edited by R. B. Braithwaite. London: Routledge & Kegan Paul.

Quine, W. V. O. (1970), The Philosophy of Logic, second edition. Englewood Cliffs, NJ: Prentice-Hall.

Quine, W. V. O. (1995), From Stimulus to Science. Cambridge: Harvard University Press.

Whitehead, Alfred North and Bertrand Russell, (1910-1913), Principia Mathematica, three volumes. Cambridge: Cambridge University Press. Second edition, 1925-1927.

Wittgenstein, Ludwig (1983), Remarks on the Foundations of Mathematics, revised edition. Cambridge, MA: MIT Press.

Some contemporary work along traditional lines in ontology and epistemology include:

Burgess, John, and Gideon Rosen (1997), A Subject with no Object: strategies for nominalistic interpretation of mathematics. Oxford: Oxford University Press.

Detlefsen, Michael (1986), Hilbert’s Program: an essay on mathematical instrumentalism. Dordrecht: Kluwer Academic Publishers.

Hale, Bob, and Crispin Wright (2001), The Reason’s Proper Study: essays towards a neo-Fregean philosophy of mathematics. Oxford: Oxford University Press.

Hellman, Geoffrey (1989), Mathematics without Numbers. Oxford: Oxford University Press.

The following provides a variant of Quinean naturalism:

Maddy, Penelope (1997), Naturalism in Mathematics. Oxford: Oxford University Press.

For some uses of mathematical logic in philosophy, see:

Feferman, Solomon (1998), In the Light of Logic. New York: Oxford University Press.

Sieg, Wilfried (1994), ‘Mechanical Procedures and Mathematical Experience.’ In Alexander George, editor, Mathematics and Mind, Oxford: Oxford University Press.

Tait, William (1981), ‘Finitism.’ Journal of Philosophy, 78:524-546.

For various historical approaches to the philosophy of mathematics, see:

Aspray, William, and Philip Kitcher, editors (1988), History and Philosophy of Modern Mathematics. Minneapolis: University of Minnesota Press.

Grosholz, E., and H. Breger, editors (2000), The Growth of Mathematical Knowledge. Dordrecht: Kluwer Academic Publishers.

Kitcher, Philip (1984), The Nature of Mathematical Knowledge. Oxford: Oxford University Press.

Lakatos, Imre (1976), Proofs and Refutations. Cambridge: Cambridge University Press.

Otte, Michael, and Marco Panza, editors (1997), Analysis and Synthesis in Mathematics: history and philosophy. Dordrecht: Kluwer Academic Publishers.

For various structuralist views of mathematics, see:

Awodey, Steve (1996), ‘Structure in mathematics and logic: a categorical perspective.’ Philosophia Mathematica 4:209-237.

McLarty, Colin (1993), ‘Numbers can be just what they have to.’ Noûs 27:487-498.

Parsons, Charles (1990), ‘The structuralist view of mathematical objects.’ Synthese 84:303-346.

Resnik, Michael (1997), Mathematics as a Science of Patterns. Oxford: Oxford University Press.

Shapiro, Stewart (1997), Philosophy of Mathematics: structure and ontology. New York: Oxford University Press.

For some initial attempts to address broader topics in the epistemology of mathematics, see:

Avigad, Jeremy, ‘Mathematical method and proof.’ To appear in Synthese.

Mancosu, P., J. Jorgensen, and S. Pedersen, editors (2005), Visualization, Explanation and Reasoning Styles in Mathematics. Berlin: Springer-Verlag.

Steiner, Mark (1978), ‘Mathematical Explanation.’ Philosophical Studies 34:133-151.

Tymoczko, Thomas, editor (1998), New Directions in the Philosophy of Mathematics: an anthology, revised edition. Princeton: Princeton University Press.
