Problemos ISSN 1392-1126 eISSN 2424-6158

2023, vol. 103, pp. 174–186 DOI: https://doi.org/10.15388/Problemos.2023.103.14

Akademiniai maršrutai / Academic Itineraries

On Necessary Individuals and Ways (sic!) for Them to Be: Celebrating the 10-Year Anniversary of Modal Logic as Metaphysics

Timothy Williamson interviewed by Pranciškus Gricius

_________

Copyright © Timothy Williamson, Pranciškus Gricius, 2023. Published by Vilnius University Press.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (CC BY), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

_________

Timothy Williamson, the Wykeham Professor of Logic at the University of Oxford, is one of the leading figures in contemporary analytic philosophy. His areas of research include philosophy of language, epistemology, logic, metaphysics, and metaphilosophy. Professor Williamson has authored over two hundred articles and numerous books, including such modern classics as Vagueness (Routledge, 1994), Knowledge and Its Limits (Oxford, 2000), The Philosophy of Philosophy (Wiley-Blackwell, 2007; 2nd updated ed. 2021), and Modal Logic as Metaphysics (Oxford, 2013).

I had the pleasure of meeting Professor Williamson at The 26th Oxford Graduate Philosophy Conference, and he kindly agreed to give an interview on matters of modality. The enjoyable and fruitful few-hour talk that we had, which appears below slightly abridged, revolved around the history of modal logic, Saul Kripke and his works, the controversy between necessitists and contingentists, higher-order logics and metaphysics, and the influence of Modal Logic as Metaphysics.

The interview took place on a rainy afternoon on Monday, the 14th of November 2022, at New College, Oxford.

PG: Why do modal notions (necessity, possibility, etc.) play an important role in philosophical inquiries?

TW: One simple thing to say is that philosophers typically want what they say to be necessarily, and not just accidentally or contingently, true. We can see that in the way that they take merely possible counterexamples to what they are saying very seriously and as potential objections. That would not be the case if philosophers were putting forward what they say simply as universal generalizations. So that is one generic reason why modality is important in philosophy.

Another reason, I think, has to do with the fact that modality itself presents both epistemic and metaphysical challenges to philosophers – it is not clear what the place of possibility in reality is, and it is not clear how we can know about things which are non-actual. One can see how serious these challenges are by looking, for example, at how difficult it was for such empiricists as Hume and Quine to make sense of modality. But the challenge that modality poses should, I think, be viewed as a constructive one, since it is pretty evident that we have a lot of modal knowledge, and so we have to understand and explain human cognition as something that can make sense of it, as something that can find out modal truths. A lot of modal knowledge of a very modest kind is just obvious (and this became really clear, in particular, when people started taking the modal aspect of the semantics of natural language seriously). For example, sometimes it has to do with knowledge of what we can and can’t do in a practical way. But even in these simple cases, the features of modality that make it in some way mysterious – that is, mysterious if you approach it from a certain philosophical perspective – are already present, for even in these simple cases one can ask the challenging question of how we can know that something that didn’t happen nevertheless could have happened. And, of course, all these issues interact with debates about free will and a series of other philosophical issues.

PG: We want our philosophical theories, you’ve claimed, to be necessarily and not just accidentally true. What are the implications of this fact for our methods of doing philosophy?

TW: Well, for one thing, it helps to explain why thought experiments are relevant in philosophy. If we were considering a generalization which is only accidentally true, then the fact that we could imagine a counterexample to it would not be a threat. A trivial example: if one were to claim that there are in fact no elephants in this room, then it wouldn’t help in deciding whether that is true if we could imagine an elephant in this room. But if someone claims that it is a necessary truth that there are no elephants in this room, then the mere possibility of one is a counterexample to that claim. Something similar happens, of course, in much less trivial ways, in all sorts of branches of philosophy.

Also, I think it puts some limits on a crudely naturalistic sort of methodology. For example, if we are interested in necessary conditions for knowledge, then it is not enough to have generalizations about humans, because possibly there is knowledge which is beyond human knowledge. In fact, in this case, there are a lot of actual cases of such knowledge, e.g. animal knowledge.

PG: You’ve also claimed that modality poses challenges in its own right. I would like to turn now toward the study of modality itself and, in particular, to modal logic and its history. What were the main highlights in modal logic in the last century?

TW: The first thing to say is that, although modal logics were developed both by Aristotle and in the medieval period, around nineteen hundred there was hardly any modal logic going on. A very early figure was Hugh MacColl, but probably the beginning of the systematic inquiry into modal logic within the analytic tradition was the work of Lewis and Langford. Then, in 1947, Carnap’s Meaning and Necessity was published, which was important as the first properly systematic approach to the semantics of modal languages and the use of compositional and intensional semantics. It was the precursor of possible worlds semantics, although Carnap had a linguistic version of possible worlds which he called state-descriptions. Nonetheless, he himself did draw attention to the fact that these state-descriptions were modern analogues of Leibniz’s possible worlds.

And then, above all, the work of Kripke on possible worlds semantics (though I should mention that he wasn’t an isolated figure in developing it). His semantics is mathematically much more tractable than Carnap’s, and it turns out that making possible worlds models mathematically more tractable gives them a structure that can more easily be read ontologically than, say, Carnap’s state-descriptions. Also, my hypothesis is that the metaphysical turn was to some extent inspired by the work that Prior was doing in the 1950s on tense logic, in which you have operators like ‘in the past’, ‘in the future’, and so on. It is very clear that these temporal operators are concerned with metaphysical structure and not simply with linguistic matters. That, I think, provided a kind of template for Kripke’s work, which took possible worlds more seriously than just the kind of linguistic descriptions that they were in Carnap (though I should also note that I don’t think that that requires going to something like David Lewis’s modal realism). The idea that what possible events there are is something that is constrained not by linguistic rules but by reality itself is analogous to the idea that what past and future events there are is constrained by reality and not simply by language. In the case of time, this idea is very obvious, but, in the case of modality, this idea wasn’t obvious at all, well, at least in the 1950s, before Kripke’s work on modal logic.

Also, there was a lot of mathematical development which had some relation with the philosophical developments, in which all sorts of people like Gödel, Ruth Barcan Marcus, and so on, contributed by developing more streamlined, better working axiomatic systems. And then Kripke’s work with completeness theorems tied those two traditions together. After that, in the sixties and seventies, you’ve got a period with a massive mathematical development where these notions of completeness and all sorts of more technical things were applied to a vast range of modal logics. Also, other applications of modal logics were developed, e.g. deontic logic and epistemic logic – the latter being an application of modal logic which probably had the most impact outside both philosophy and mathematics because it has been widely used in computer science and theoretical economics. There was also a lot of interaction between modal logic and the semantics of natural languages, which goes back to Montague in the fifties, and then in the seventies in the work of people like Kratzer who have made it a fairly canonical dimension of the semantics of natural languages.

In general, in the sixties and seventies, modal logic was becoming pretty much the canonical framework for analytic philosophy. It was simply part of the toolkit of analytic philosophers – they were quite often thinking in terms of possible worlds, and, when they wanted to formalize philosophical theses, they would do it using quantified modal logic, whereas, in the fifties, they would have been using quantified non-modal logic. That continued thereafter in the twentieth century.

Another thing to note is that there has been an increasing divergence between mathematical and philosophical developments. It turns out that there are uncountably many systems of modal logic, and only very few of them seem to be philosophically interesting. The few that are philosophically interesting almost always have fairly simple semantics and don’t give rise to the more complex mathematical phenomena, such as incompleteness. But, I think, it is a typical development in logic: often, philosophically, the most interesting results are the ones that are established relatively early in the development of the subject, and then, as the subject goes on, it gets into things that are more and more mathematically complicated but that have fewer and fewer philosophical rewards to them.

PG: And what were the main technical mathematical advancements of the semantics that Kripke developed that made it possible to investigate the metaphysics of modality?

TW: Well, the standard way of doing the semantics of some kind of language we are interested in for logical purposes is to define some mathematically precise notion of a model of the language. The models of the language are thought of as different interpretations of the language. These interpretations agree on the basic structural matters, but they are allowed to vary in their interpretation of the non-logical aspects of the language. This approach has been immensely successful in non-modal logic. So that is an idea of a model. There was also an idea of a possible world which has ancestry going back to Leibniz. And, in the early stages of the semantics of modal logic, the idea was that we should, in effect, identify models with possible worlds, i.e. possible worlds were understood, as it were, as an informal way of thinking about these formal models. That is what Carnap did, and that was what was typically done in the 1950s.

What Kripke did was to separate models and possible worlds. He treated those as completely different categories. His idea was that each model comes with its own set of what we can call possible worlds. Of course, from a mathematical point of view, we just have this set W and we do not say anything about the nature of its members. But informally we are thinking of members of W as possible worlds. So, each model, each interpretation of the language carries with it its own set of possible worlds. Again, it is helpful to think about the analogy with the logic of time – we would never identify times with models, or models with times. But, in the fifties, it seemed quite a reasonable thing to make this identification between models and possible worlds. It turned out that the key insight is that you’ve got to treat these as, in a way, independent dimensions. Once you’ve done that, for very technical reasons, suddenly, the model theory simplifies and works much better. It gives you more flexibility – that’s the main thing that you get by treating them as different dimensions. And that is really what Kripke saw in his early work at the end of the fifties and early sixties.
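The separation Williamson describes – each model carrying its own internal set W, whose members the mathematics treats as bare points – can be illustrated with a toy evaluator. This is only an illustrative sketch of the standard textbook picture; the class and method names (KripkeModel, holds) are chosen for illustration, not drawn from any particular formalism:

```python
# A toy Kripke model: a set of "worlds" W, an accessibility relation R,
# and a valuation V assigning to each atomic sentence the worlds where it holds.
# Mathematically, members of W are just points; "possible world" is the
# informal gloss Williamson mentions.

class KripkeModel:
    def __init__(self, worlds, access, valuation):
        self.W = set(worlds)    # the model's own set of "worlds"
        self.R = set(access)    # pairs (w, v): v is accessible from w
        self.V = valuation      # dict: atomic sentence -> set of worlds

    def holds(self, formula, w):
        """Evaluate a formula at world w. Formulas are atom strings or
        nested tuples: ('not', f), ('and', f, g), ('box', f), ('dia', f)."""
        if isinstance(formula, str):
            return w in self.V.get(formula, set())
        op = formula[0]
        if op == 'not':
            return not self.holds(formula[1], w)
        if op == 'and':
            return self.holds(formula[1], w) and self.holds(formula[2], w)
        if op == 'box':   # necessity: true at every world accessible from w
            return all(self.holds(formula[1], v)
                       for v in self.W if (w, v) in self.R)
        if op == 'dia':   # possibility: true at some world accessible from w
            return any(self.holds(formula[1], v)
                       for v in self.W if (w, v) in self.R)
        raise ValueError(f"unknown operator: {op}")

# Two worlds; p holds only at w2; w1 can "see" both worlds.
m = KripkeModel({'w1', 'w2'},
                {('w1', 'w1'), ('w1', 'w2'), ('w2', 'w2')},
                {'p': {'w2'}})
print(m.holds(('dia', 'p'), 'w1'))   # p is possible at w1
print(m.holds(('box', 'p'), 'w1'))   # but p is not necessary at w1
```

The point of the separation shows up in the constructor: the set of worlds is a parameter of each model, not identified with the model itself, so different interpretations of the language can carry different world-sets.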

Although there were a bunch of other people at the time, well, at least a few other people who had quite strong mathematical connections with what Kripke was doing, as far as I can see, none of them had the same clarity of vision of how to structure the semantics of modal logic from the mathematical point of view. Also, none of them had the same clarity of how to make sense of it philosophically and metaphysically in the way that Kripke had. And, by the time you’ve got the models that Kripke describes in his 1963 paper Semantical Considerations on Modal Logic, you’ve got the models that we use today, i.e. the models described there are pretty much the models that you would teach to students. Five years before that paper, no one had that kind of understanding of what was going on. That was a decisive change and an advancement in our understanding of the foundations of modal logic.

PG: So, it seems that modal logic can be approached from two perspectives: one can study it from a purely mathematical point of view, and, on the other hand, one can approach it from a metaphysical point of view. Sometimes, when these issues are discussed, the distinction between intended and unintended models comes into play. Could you elaborate on these issues?

TW: So when we are approaching the subject from the point of view of metaphysics, we have in mind some particular level of modality – the metaphysical modality. In terms of the symbols of the formal language, we’ve got the box for necessity and the diamond for possibility, and we want to fix the interpretation of those operators. Whereas, when you look at the mathematical development, obviously, from the mathematical point of view, you do not want to be talking about metaphysical possibility because that is not a mathematical idea at all. What that corresponds to in Kripke’s approach is that models of modal logic have a set W which informally we think of as a set of possible worlds, but formally that plays no role in the mathematics. It is just that formulas are evaluated as true or false relative to members of this set W. But there isn’t any particular account of what members of W are, and that means that, at the mathematical level, we are in effect leaving the interpretation of the modal operators unspecified.

The intended model (if there were a unique one, and that is quite a tricky question) is one that would interpret modal operators in the specific way that we are interested in; in such a model, they would mean what we want them to mean for metaphysical purposes. That involves deciding which particular set we want W to be. Instead of just leaving W as a kind of variable which could be any non-empty set, we want to specify a particular set W that corresponds to our understanding of the modal operators. If we are interested in metaphysical modality, then we would say that W consists of the set of metaphysically possible worlds, and, of course, different metaphysicians have different ways of explaining exactly what that means. Just to give a contrast: if we are interested in the logic of time, many aspects of the formal modal framework are the same, but there we would be thinking of the relevant set – the set relative to which the formulas are evaluated – as the set of times rather than a set of worlds. So the intended model is the one where we fix the interpretation W for a specific, say, metaphysical, purpose.

The distinction between intended and unintended models is not something that started with modal logic. It is, for example, very important when we are thinking about models of arithmetic because ordinary first-order arithmetic has lots of unintended models. Arithmetic is supposed to be the theory of operations like addition and multiplication on the natural numbers, i.e. zero, one, two, …, namely, the finite cardinal numbers. So, on the intended interpretation, any member of the domain of natural numbers is one that you could get to from zero by applying the successor operation a finite number of times. However, there are a lot of unintended models which make all the axioms of arithmetic come out true, where you are interpreting natural numbers as a set that includes lots of members that can’t be reached from zero in a finite number of applications of the successor operation. So in thinking about the interpretation and model theory of arithmetic, we have to make this distinction between intended and unintended models. This distinction applies in a similar spirit to the case of modal logic. The big difference is that, in the case of arithmetic, we already know exactly what the structure of the intended model should be (or, actually, strictly speaking, in arithmetic what we have is a class of intended models, but they all have the same structure as each other, so it doesn’t really matter which one you choose). As it were, what we are concerned with is simply the difference between any old model that has the right basic features and those specific ones which correspond to the way we want to interpret the language for the purposes of some particular inquiry.

PG: And the debate you are most famous for introducing into one such particular inquiry, inquiry into modal reality, is the controversy between necessitists and contingentists. What are these positions?

TW: So, roughly speaking, necessitism is the position that ontology is necessary. It is the view that it is necessary which things there are, where the necessity that we are concerned with is the strongest kind of objective necessity, which you might identify with metaphysical necessity. And contingentism is the view that it is contingent which things there are. It turns out that these positions can be more or less encapsulated in single formulas of the language of quantified modal logic: necessitism is just the formula which says that necessarily everything is necessarily something, and contingentism is the negation of necessitism. To get the full force of these principles, the quantifiers ‘everything’ and ‘something’ have to be understood as completely unrestricted so that they range over absolutely everything.

To make the contrast a little bit more vivid: one difference between them is that most contingentists (though that is a typical but not an absolutely necessary feature of contingentism) think that there could have been more or fewer things than there actually are, whereas necessitists think there is no contingency in, roughly speaking, how many things there are. Or, let us consider a particular example: the ontology of events. We can take an event that occurred, for example, the First World War. What everybody, or perhaps almost everybody, agrees on is that it is contingent that the First World War occurred, i.e. that all those battles and so on occurred. The contingentist thinks that if the First World War had never occurred, then this event which we call ‘the First World War’ would not have existed at all – there would have been no such thing. Whereas, the necessitist thinks that that entity still would have been something, but it would not have been an occurring event – it wouldn’t have been, if you like, realized or concrete. There still would have been, according to necessitists, such an entity in this abstracted sense. However, it would not have been a war, because, of course, a war can only occur if it’s fought. It would have simply been a possible war.

PG: And you defend necessitism. What are the main arguments for this view that you propose?

TW: There are various sorts of considerations. One is this: take the standard semantic framework for talking about modal logic, the one which goes back to Kripke. From a technical point of view, this semantics can handle contingentism. But if you have a contingentist version of this semantics, it works in a way which does not take the semantics very seriously because the models that you need to represent the idea that there could have been things which as a matter of fact do not exist are ones where those possible things are represented by actual objects in the model, and so the model isn’t really the intended model. You cannot have an intended model of that kind of contingentism. Of course, that is not by itself decisive because you could take a lesson to be that one should reject a fully realist attitude to the semantics. One could say that this semantics is representing what we are thinking in some way or other, but it is not fully faithful to it. So when contingentists try to explain their position, they get into all sorts of difficulties when they try to talk about what there could have been but isn’t. It is very difficult for them not to be attributing properties to something that they themselves think does not exist. They are talking about things which, according to them, are such that there are no such things.

Another consideration in favor of necessitism is that if one is thinking about these different approaches as, in a broad sense, scientific theories, necessitism is much simpler and more elegant than contingentism. It is hard to argue with that. So it has the advantage – as long as there is no sort of evidence which decisively refutes necessitism. Also, the necessitist does have a way of explaining what is going on with the apparent counterexamples to it.

I think there are a bunch of other considerations of a semi-technical kind that have to do with the superior unity and elegance of necessitism. Furthermore, it is often the case that contingentists help themselves to necessitists’ ways of thinking because they are so much more convenient. And then contingentists try to explain, on contingentist terms, why it is OK to do so, but I think those explanations tend to break down. One can argue for this claim in quite a lot of detail. I’ve spent hundreds of pages in Modal Logic as Metaphysics doing so.

PG: An interesting feature of the debate between necessitism and contingentism is that, although the debate is officially about whether it is a matter of necessity which individuals exist, higher-order modal logic is really important in this controversy. What are higher-order logics and why are they so important in this debate?

TW: Higher-order logics, as I understand them, are logics where you can generalize not only on names, but also on predicates, sentences, and so on. Roughly speaking, in higher-order logics, every grammatical position is a potential locus of generality.

In fact, even before one gets to modal logic, there are various areas where higher-order logics are better than first-order logic (the latter being one where one can generalize only into name position). For example, I was talking about the fact that there are unintended models of arithmetic. I was talking about first-order arithmetic there. However, you can exclude those models which do not fit our intention if you go higher-order, whereas you cannot exclude them in first-order logic. Roughly speaking, what is going on there is that the principle of mathematical induction – that is, the principle that if zero has a property, and, whenever a natural number has that property, its successor has it too, then every natural number has that property – cannot be captured in first-order logic. You cannot capture the full generality of that principle in first-order logic, because that talk of, what I loosely called, properties is really one that has to be understood as involving generalizing into predicate position rather than name position. There is something similar with set theory as well – there are various set-theoretic principles whose full generality can only be captured in a higher-order framework.

And so one way in which the higher-order framework is important to modal logic and to the necessitism versus contingentism dispute is this: when you formulate natural principles of higher-order logic, when you express how we want the generalization into predicate position to go, and particularly how we want these generalizations to go given that we are interested in applications of it in mathematics as well, you can see that we really want to be generalizing over any old property – we want to allow any meaningful predicate whatsoever. We do not want to exclude any of them because they are not natural enough or anything like that. When you are doing higher-order generalizations in mathematics, you are not concerned with whether, to put it crudely, the property is a natural one or not. Any property of natural numbers is subject to, for example, the principle of induction.

When you build this idea into higher-order logics in a modal setting, that is, when you combine higher-order generalizations with possibility and necessity operators, what you find is that the analogues of necessitism for all these higher orders are consequences of the principles that you need for the higher-order generalization to be working in the right way. That is one of the ways in which necessitism at the first order is much more natural, because then all orders work in the same way. Whereas if you are a contingentist (in the sense that I explained) and if you combine that with a good higher-order theory, then your higher-order theory will be necessitist at all the higher orders, but contingentist at the bottom one. There is something ugly and implausible about such a contrast between the bottom order and all the higher orders.

So higher-order languages enable us to put the dispute between necessitism and contingentism in a much more general setting, and, within this more general setting, we can find a lot more considerations for going one way rather than the other between necessitism and contingentism. Whereas if we were just to confine ourselves to a first-order language, there would be too much that we couldn’t express, and we would not be able to express all the relevant considerations.

PG: And what is the nature of entities that we are quantifying over in higher-order logics?

TW: When approaching this issue, I think, the key thing to say is that – when we are talking about the meaning of a higher-order language – we should be doing it in a higher-order language ourselves. If we try to talk about it in a first-order language, if we try to explain everything in a first-order language, we are in effect reducing the higher-orders to the first-order, and then we are actually throwing away all the advantages of the higher-order framework. Now the trouble with natural languages is that they tend to force all generality into first-order generality. The expressive limitations of natural languages make it difficult to do justice to higher-order meanings.

And so when we ask what kind of entities properties, relations, states of affairs, and so on correspond to, that question is really putting things in a first-order way. Well, we are generalizing over entities, and ‘an entity’ is just a normal noun, and so we get only one kind of quantification going on there. When approaching these matters, one should not think in terms of a taxonomy of an all-encompassing category of entities. Rather, each grammatical category has to be taken in its own right, and it is not to be reduced to others. And although I’ve sometimes been talking about properties because I was talking in English (a natural language) and that is a convenient way of talking, really, a more rigorous way of thinking about that is that we are talking about generalizing into one-place predicate position.

This may sound sort of restrictive and sort of unforthcoming about what we mean, but these restrictions already arise outside modal contexts. For example, when thinking about the fact that in metaphysics we want to generalize over everything, it is very difficult to avoid contradictions if one has a notion of quantifying over everything, where that is somehow supposed to encompass generality over all grammatical positions. In this area, we know that every approach has to apply restrictions in some way in order to avoid contradictions which are all related to Russell’s paradox in set theory. There are various analogous paradoxes that can arise in a higher-order setting. So we have to respect these distinctions between grammatical types and not try to generalize from one type to another because that always involves some sort of reduction which, once we combine it with unrestricted generality, will get us a contradiction.

I am certainly not the first person to suggest this approach. We can find something similar in spirit in Frege. When he is talking about the difference between, in his terminology, concepts and objects, Frege is very aware that it makes it look as though concepts are just a special type of objects. But Frege is very clear that that is not the right way to understand what he is doing, and he specifically says that his explanations have to be taken with a grain of salt because they are not literally accurate. This approach of understanding higher-order languages from the perspective of a higher-order language or metalanguage is really, I think, quite Fregean in spirit.

PG: And what if I would frame the question thus: what is the nature of ways for things to be? Where by ways I would try to pick out what monadic predicates denote. Am I still reducing it?

TW: Yes! Because, well, ‘way’ is just a noun. It is as much a noun as ‘cat’ or ‘dog’, and so, from a grammatical point of view, asking “What are ways to be?” is similar to asking “What are cats?” and “What are dogs?”. I think the same thing is going on if one uses the kind of metalinguistic route that you were also suggesting because, if you talk about something like “what predicates refer to,” it is very difficult for that ‘what’ not to be reduced to “what things predicates refer to.”

And so, I think, when we are speaking or writing a higher-order language, we are doing something that you can’t really do in any full-blooded way in a natural language. You can’t learn or understand higher-order languages by translations into a natural language. You just have to get into the language in the same way that when a child learns their native language they are not doing it by translating it. They are just learning it by the direct method.

Of course, some people are worried that then – maybe – we are just talking nonsense, and so one might ask “how do we know that we are not straying into nonsense?”. But one thing that is clear in general about worries about nonsense is that one cannot impose the restriction that before you start speaking in a certain way you have to justify or vindicate the meaningfulness of that way of speaking. Because if we had such a constraint, we could never start speaking, since if we tried to vindicate one piece of language, we would have to use another piece of language to vindicate it. So we would get some kind of infinite regress. The right approach is, I think, just to go ahead and work in a higher-order language. If that is sufficiently fruitful and explanatory, then that is all the evidence we need for knowing that it is meaningful. After all, if we were simply talking nonsense when we talked a higher-order language, we would get into all kinds of trouble. I think this is pretty much the attitude that is taken in science. When people started talking about gravity, of course, there was a question: “does that really mean anything?”. But it turned out to be an incredibly fruitful way of talking. I think the scientific successes which came out of talking about gravity were enough to establish beyond serious doubt that the word ‘gravity’ did have some coherent meaning. After all, if you use a term with no coherent meaning, the chances of it working out scientifically are not bright, to say the least.

PG: But what if I were to say: OK, the fact that higher-order logic is sufficiently fruitful and explanatory provides us with enough evidence that it is meaningful. But still, from the metaphysical point of view, I want some kind of story about what is going on when we speak this meaningful language. For example, David Lewis provides such a story about what propositions, properties, etc., are. Yes, of course, Lewis reduces them to individuals and set-theoretic constructions out of them, but still, he tells me some story or other. So what if I grant you that higher-order logic is meaningful and still ask what the metaphysics behind it is?

TW: Well, David Lewis took over from his Ph.D. supervisor W.V. Quine a highly reductionist framework, and Lewis’s way of making sense of modal logic involves reducing it to first-order terms that are completely fine logically from Quine’s point of view (even though Quine does not like all these alternative concrete worlds that Lewis postulates).

But we have to be very careful that these demands to explain what you are talking about are not simply a way of smuggling in some kind of reductionist agenda. I think a very good model to think about is set theory, because set theory is the nearest thing we have in mathematics to a generally accepted foundational theory. However, if you ask mathematicians: “But really, what is a set?”, I think all that they can do is tell you what the axioms of set theory are. Of course, there are attempts to say what a set is – well, a set is a bit like a collection, it is a bit like a group, and so on – but none of these attempts are accurate because they do not convey the mathematical idea of a set. The axioms of set theory, on the other hand, are incredibly informative about how sets work and what sets there are. These axioms don’t answer all questions, as we know, because they do not determine whether the continuum hypothesis is true, but they answer a vast range of questions. They are informative enough that we can do virtually all mathematics on the basis of them. I think that is a much better model of what clarification should consist in. We expect something similar in physics. When we ask a physicist what electrons are, the best that they can do is just give us a whole lot of currently accepted laws and principles about electrons.

The same holds in higher-order logic. We should be willing to say what principles of higher-order logic we accept, and it turns out that these can be very powerful principles. And if anybody says – but, well, you haven’t really answered my question – then, I think, one can hit the ball back over the net by saying: “You clarify your question first; you tell me what kind of answer you would be willing to accept.” My suspicion is that it will turn out that, given the constraints they put on an answer, either those constraints are already met by the kind of things we have said so far, or else there is some sort of reductionist agenda in them which makes them simply illegitimate demands. I think the line between reasonable and unreasonable demands in metaphysics should not be radically different from that in any natural science or in mathematics.

PG: Almost ten years have passed since the publication of Modal Logic as Metaphysics, the book in which you defend necessitism and elaborate on these issues of higher-order modal logic. What were the main developments that issued from it?

TW: One thing that I am fairly confident the book has done is move the necessitist view into the mainstream. I certainly would not claim that it has become the majority view, but I think quite a lot of people accept it these days; it has become one of the standard alternatives in the area. But, in general, I think the influence of the book, and the direction that the discussion in this area has taken, has not been one of trench warfare about necessitism versus contingentism but rather the development of the much broader themes which I was bringing to bear on this particular issue. For example, there has been a lot of work on higher-order metaphysics. All sorts of people have been working in that area; very recently, for instance, I reviewed an interesting book by Cian Dorr, John Hawthorne and Juhani Yli-Vakkuri, The Bounds of Possibility, which is about identity across different possibilities. They are working within a higher-order framework. Andrew Bacon is another person who has been developing a higher-order framework. I think that has been a very important development, and it goes way beyond the area of modal metaphysics. In my view, higher-order logic is a much better framework in which to think about problems that were traditionally discussed under the headings of universals and particulars, and nominalism versus conceptualism versus realism. So that is one kind of development. Another kind of development, closely related to the first, has to do with different types of modality – in particular, with what one might call objective modality (i.e. modality which does not concern, for example, our state of knowledge), with metaphysical modality as perhaps the top level. Some people think there is an even higher level of logical modality, though I myself am skeptical about the idea of logical modality.

So the biggest developments, I think, happened at that more general level, and I do think that that had something to do with the publication of Modal Logic as Metaphysics. In the long run, these developments will also cast light on the very specific issue of necessitism versus contingentism, because by thinking about it within a very broad logical framework we can bring more and more considerations to bear on it. For me, that was the most striking and gratifying development. Well, it is a development that is unlikely to go really fast, because the technicalities involved in higher-order logic are harder than those involved in modal logic. But just as in the sixties and seventies it became standard for philosophers to articulate philosophical views in a modal language, so, I think and hope, it will become increasingly standard for people to be willing to formulate philosophical theses in a higher-order language. A bit of that, of course, has always been going on, but I think it has the potential to be much more widely applicable than has happened so far.

PG: And what would be your advice for anyone who wants to enter these fields of inquiry that we have discussed today?

TW: First of all, there is a certain amount of logic to be mastered. In the case of modal logic, there are lots and lots of textbooks. In the next few years, I think, we are going to see the emergence of more philosophy-friendly textbooks of higher-order logic, so this should become an easier area to enter than it has been. There are some technicalities that you need to master for doing philosophy, but the most complicated things are not in fact especially philosophically relevant (though occasionally they are).

Another thing to note is that one shouldn’t think of what we are doing when we are doing philosophy in traditional analytic-philosophy terms – philosophy as some kind of conceptual analysis, concerned with what’s intuitively plausible, and so on. Those are not very helpful perspectives for getting into these fields. It is better to think that in the areas we’ve discussed today we are theorizing. When we develop more expressive languages, they raise more and more questions that common sense just really hasn’t thought about, and that is particularly the case in the higher-order area. It is so difficult for these higher-order notions even to be captured in natural languages. So we should think that what we are doing is similar to a science where we are opening up domains that have simply been unexplored before. And we certainly shouldn’t expect commonsensical ways of thinking to be so authoritative here. Of course, it is not that good scientists throw common sense out of the window – after all, when they are recording data and so on, they are still using some kind of commonsense assumptions, e.g. that they are not hallucinating their experiment. So we should understand theories put forward in these areas of philosophy in roughly the way we understand theories in natural science (in fact, I think this applies to mathematics too, although it is much less obvious). Theories are put forward to some extent in a hypothetical spirit, and they are appropriately evaluated in what is known as an abductive way, which means, very loosely, that there should be an optimal tradeoff between simplicity and strength (where strength is understood as informativeness). Theories in these areas are constrained by evidence like any other theories.
Also, rather than tailoring higher-order logic to some kind of pre-theoretic conception of what properties are, I think it would be much better first of all to look at the way higher-order considerations are already operative in mathematics, e.g. at the way in which higher-order theories of arithmetic and set theory are actually better at capturing the intended structure than first-order theories. We should be theorizing in that sort of spirit – rather than formulating theories that are as cautious and weak as possible in their content, we should be putting forward, in a hypothetical spirit, extremely strong, informative theories with lots and lots of possibly controversial consequences, in order to assess them in the way that other areas of science proceed.