The classical rule of Repetition says that if you take any sentence as a premise and repeat it as a conclusion, you have a valid argument. It is a very basic rule of logic, and many other rules depend on the assumption that repeating a sentence, or indeed any expression, preserves its referent, or semantic value. However, Repetition fails for token-reflexive expressions. In this paper, I offer three ways in which one might replace Repetition and still keep an interesting notion of validity. Each is a fine way to go for certain purposes, but I argue that one in particular is to be preferred by the semanticist who thinks that there are token-reflexive expressions in natural languages.

1. The Problem

Repetition is a good candidate for the simplest example of a valid argument form: take any sentence, repeat it, and you have a valid argument. Indeed, it is hard to imagine a reason for contemplating a logic that invalidates this simple rule: if we cannot be sure that repeating a sentence preserves truth values, how could Modus Ponens work? Or Double Negation? Or, indeed, any rule that relies on several appearances of the same sentence? Repetition looks trivial, and its failure looks unacceptable.[1]

However, as we shall see, Repetition fails for token-reflexive expressions. What does that show? A pessimist may conclude that there is no logic of token reflexivity, at least not one that allows some arguments to count as valid. If we add the Montagovian view that the semantics of a natural language is incomplete until a notion of entailment has been defined, this would seem to show that there are no token-reflexive expressions in natural language (or that Montague was wrong, of course).[2] The extreme pessimist may fall into nihilism about logic in general: given that logical rules are supposed to be exception-less, and that token reflexivity provides counterexamples to logical rules, they may conclude that there is no notion of logical validity.[3]

In this paper, I discuss three good ways to deal with this problem. First, adapting a proposal from French (2016), one could invalidate Repetition in general, thus obtaining a non-reflexive logic. This would yield a logic where no argument is valid tout court, but we still get meta-arguments that are valid, that is, some arguments will count as valid provided that some other arguments (which contain expressions from the former arguments) are valid. This would allow us to filter out troublesome token-reflexive expressions, and would be, arguably, independently motivated.

The other two ways to deal with our problem are more conservative, in that they allow us to recover something like Repetition. The first, proposed in Kaplan (1999), works for all token-reflexive expressions, but does not really allow the repetition of the same sentence, and does not seem to be the way natural languages work. The second, a modified version of the proposal in Radulescu (2015), works only for some such expressions, and recovers Repetition only in a restricted way even for those expressions. Still, I argue that both ways yield rules that are worthy of inheriting parts of the role of Repetition, and that the last proposal, restricted though it is, may well suffice for natural languages.[4]

One last prefatory remark: there are several notions that “token-reflexivity” picks out in the literature. The weakest merely requires that expressions not be assigned a referent unless they are tokened, or, if you prefer, that only tokens be assigned referents, rather than untokened expressions. For instance, one might argue, along the lines of Strawson (1950), that all expressions are token-reflexive in this sense because the proper job of semantics is to assign referents to uses of expressions.[5]

Another classical use of “token-reflexivity” comes from Reichenbach (1947). There, the claim is that indexicals are synonymous with descriptions that refer to tokens of those expressions. I believe that Kaplan (1989) has shown that this thesis is false: the descriptions end up introducing much more information into the propositions than indexicals do.

I intend a notion that is somewhere between the two above; in some sense, a better label would be “token-dependent”, as a parallel to “context-dependent”, but I will keep the label commonly used in the literature. My goal is to discuss expressions whose tokens may have different referents, just like context dependent expressions are ones whose referent varies with the context. The idea here is that each token of these expressions is assigned a referent by (ineliminably) relating that very token to its referent.[6] This will turn out to matter in our discussion of proper names (Section 6), which one may coherently claim to be both token reflexive in the first, weak, sense, and not token reflexive in our sense.

2. The Failure of Repetition

My first goal is to show that Repetition fails once token reflexivity is allowed, and to introduce a purely token-reflexive expression. Consider \(\theta\), an expression whose tokens all refer to themselves:[7]

(1) \(⟦ \theta_{\underline{n}} ⟧ = \theta_{\underline{n}}\)

Notation gets tricky here; I intend the subscripts on both sides to belong to the metalanguage, not the language itself. I mark this distinction by underlining meta-language subscripts, and having object language subscripts that are not underlined. The reason for the distinction is that we are not positing an ambiguous expression that gets disambiguated by subscripts; rather, we have a single expression, about which we can say in the metalanguage that its \(\underline{n}\)-th token refers to itself. I am also ignoring other kinds of relativization: to a context, an interpretation, a variable assignment, etc.

Consider now the following application of Repetition:

(2) \(F\theta \therefore F\theta\)

(2) is not valid, since each token of \(\theta\) refers to itself, so the first one may belong to the extension of \(F\), and the second not.[8]
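To see the failure concretely, here is a minimal Python sketch; it is my own illustration, not part of the paper's formal apparatus. Each tokening of \(\theta\) creates a fresh token whose referent is that very token, so an extension for \(F\) can contain the premise token while excluding the conclusion token.

```python
from itertools import count

_token_ids = count(1)

class ThetaToken:
    """A token of theta; by clause (1), each token refers to itself."""
    def __init__(self):
        self.n = next(_token_ids)  # metalanguage subscript: the n-th token
    def referent(self):
        return self                # [[theta_n]] = theta_n

# The argument 'F(theta) therefore F(theta)' involves two distinct tokens.
premise_token = ThetaToken()
conclusion_token = ThetaToken()

# Choose an extension for F containing the first token but not the second.
F_extension = {premise_token}

premise_true = premise_token.referent() in F_extension
conclusion_true = conclusion_token.referent() in F_extension
print(premise_true, conclusion_true)  # True False: truth is not preserved
```

Since nothing ties the two tokens' referents together, a model of this shape is always available, which is all the counterexample requires.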

3. Giving Up Repetition Need Not Be a Tragedy

Suppose one gives up Repetition, and along with it all hope that expressions maintain their referent throughout an argument. At first glance, that might seem tantamount to giving up the idea that one can construct a logic for such expressions.

Even if this is a radical proposal, it does not lead to logical disaster. French (2016) contains a proposal to give up Repetition in order to deal with the paradoxes of self-reference, like the liar and Curry paradoxes. The logic one gets has no valid arguments tout court, but it does have valid meta-arguments. That is, there is a notion of an argument being valid provided that some other arguments are valid.[9] For example, while \(p, q \therefore p\land q\) is invalid, the following meta-argument is valid: \(p \therefore p, q \therefore q \Rightarrow p, q \therefore p\land q\). Effectively, one thus filters out troublesome sentences, while allowing some notion of validity.[10]

Adapting French’s proposal for token-reflexivity may take some work. His original idea was to filter out troublesome sentences. Once token-reflexives enter the scene, we need to talk about tokens. A simple adaptation of the meta-argument above would say, for instance, that if, for any tokens of \(p\), using one as a premise and one as a conclusion preserves truth, and if the same holds for \(q\), then for any tokens \(p_{\underline{m}}\), \(p_{\underline{n}}\), \(q_{\underline{i}}\), and \(q_{\underline{j}}\), the following also preserves truth: \(p_{\underline{m}}, q_{\underline{i}} \therefore p_{\underline{n}} \land q_{\underline{j}}\).
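A toy Python rendering of this token-level meta-argument may help; it is my own encoding, not French's formalism. Represent a sentence in a model by the list of truth values its tokens receive, count it as passing the Repetition test when all tokens agree, and check conjunction introduction tokenwise only for sentences that pass.

```python
def passes_repetition_test(token_values):
    """token_values: truth values of a sentence's tokens in one model.
    The sentence passes iff any premise-token/conclusion-token pairing
    preserves truth, i.e. all tokens agree in truth value."""
    return len(set(token_values)) <= 1

def conjunction_intro_preserves_truth(p_tokens, q_tokens):
    """If p and q each pass the test, then for any tokens p_m, q_i, p_n, q_j,
    the argument p_m, q_i therefore (p_n and q_j) preserves truth."""
    if not (passes_repetition_test(p_tokens) and passes_repetition_test(q_tokens)):
        return None  # the meta-argument is silent: its premises fail
    return all((pm and qi) <= (pn and qj)  # tokenwise truth preservation
               for pm in p_tokens for qi in q_tokens
               for pn in p_tokens for qj in q_tokens)

# Sentences whose tokens agree pass the filter:
print(conjunction_intro_preserves_truth([True, True], [False, False]))   # True
# A theta-like sentence whose tokens disagree is filtered out:
print(conjunction_intro_preserves_truth([True, False], [False, False]))  # None
```

The `None` case is the filtering at work: the meta-argument issues no verdict about arguments built from referentially fickle sentences.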

This way of talking about meta-validity does not provide anything interesting for Repetition, since it basically relies on Repetition as a test for sentences (and their tokens). But perhaps giving up Repetition just is the price to pay for recovering some notion of validity.

The main advantage of this proposal is that it is independently motivated: French argues that one could address the paradoxes of self-reference, but also mentions the independent proposal of Correia (2014) as an application to a logic of grounding. So applying it to our problem, generated by token-reflexivity, is unlikely to look ad hoc.

French (2016: Section 5) also proposes a reading of the logic as saying that ordinary arguments are enthymematic. Though his reading, which is in terms of having to accept or reject a sentence, wouldn’t work for my purposes, one could propose something like the following: ordinarily, one neglects the possibility that the terms one uses are token-referential. So we make our usual inference patterns valid only by filtering out those expressions. So, in a sense, one would think of the ordinary notion of validity as conditional on the proper behavior of expressions, which cannot be ensured for token-reflexive expressions.

Many details are missing here, of course. First, French’s proposal is within sentential logic, and the token-reflexive expressions I am focusing on live in predicate logic. Second, as he acknowledges, work remains to be done in figuring out what to say exactly about the paradoxes he was interested in. But for my purposes, there is a more immediate reason not to be satisfied with this solution as the only way forward: this logic recovers a notion of validity by, in effect, filtering out referentially fickle expressions. But we do ordinarily offer arguments that contain token-reflexive expressions, and it would be good, if possible, to offer a notion of validity for them.

4. Recovering Repetition, the First Attempt

The only way to guarantee that something like Repetition applies to all token reflexive expressions is to introduce new expressions, guaranteed to corefer with the token-reflexive expressions: canonical names, or variable-like devices that can be bound cross-sententially by tokens of such expressions.[11]

Here is one way to do it. We will need to enrich our language with a device for creating canonical names: we introduce \(\theta_n\) , one such name for each of the \(n\) tokens of \(\theta\) in an argument. These canonical names are names of what the token-reflexive expressions refer to. Of course, in the case of \(\theta\) a canonical name of \(\theta_{\underline{n}}\) will corefer with \(\theta_{\underline{n}}\) , but that is not the case in general. Naturally, none of the \(\theta_n\) names is a token of \(\theta\).[12]

(3) \(⟦ \theta_n ⟧ = ⟦ \theta_{\underline{n}} ⟧ = \theta_{\underline{n}}\)

Here, the unadorned subscript on the left-hand side belongs to the language, since we want a canonical name for each token of \(\theta\). The underlined subscripts belong to the metalanguage. The intention is for \(\theta_n\) to be guaranteed to corefer with the \(n\)-th token of \(\theta\).

Replacing the invalid (2), we can now write a valid argument that looks a little like an application of Repetition (call it “Repetition\(^*\)”):

(4) \(F\theta \therefore F\theta_1\)

Does this proposal solve our problem? One might worry that we are not really getting Repetition back; after all, (4) does not contain the same sentence twice. This worry can be appeased by thinking about unproblematic applications of Repetition:

(5) \(Fa \therefore Fa\)

(5) is valid in part because \(a\) is guaranteed to keep its referent constant throughout the argument.[13] Token reflexive expressions provide no such guarantee. Canonical names thus take on the work that is done for free by the name in (5), due to the rule that names get only one interpretation in an argument.
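The stipulated coreference behind Repetition\(^*\) can be sketched in Python; again, this is my own toy model, not the paper's apparatus. The canonical name's referent is computed from the token it is keyed to, so (4) preserves truth under any choice of extension for \(F\).

```python
class ThetaToken:
    """A token of theta; each token refers to itself."""
    def referent(self):
        return self

def canonical_name_referent(theta_tokens, n):
    """Clause (3): [[theta_n]] = [[n-th token of theta]]. The canonical name
    is not itself a token of theta; it is keyed to one by stipulation."""
    return theta_tokens[n - 1].referent()

first_token = ThetaToken()  # the lone token of theta in argument (4)

# For every choice of extension for F, premise and conclusion of
# 'F(theta) therefore F(theta_1)' agree in truth value.
for F_extension in ({first_token}, set()):
    premise = first_token.referent() in F_extension
    conclusion = canonical_name_referent([first_token], 1) in F_extension
    assert premise == conclusion
print("Repetition* is valid in this toy model")
```

The contrast with the earlier counterexample is that here there is only one token of \(\theta\) in play; the conclusion reuses its referent via the canonical name rather than producing a fresh token.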

But there is a better objection to the current solution. Suppose one thinks (as I do) that natural languages contain token reflexive expressions, and that we use them in arguments without deploying canonical names, or variables, or such tools.[14] Consider the following arguments:

(6) I am left handed. All those left handed have difficulty with fountain pens. Therefore I have difficulty with fountain pens.

(7) It is dark here. Wherever it is dark, it is cold. Therefore it is cold here.

These arguments could be changed so that the same indexical is not used twice: a different speaker could replace the conclusion of (6) with “he has difficulty with fountain pens”. But that is to offer a different argument, assuming that changing “I” to “he” is sufficient to guarantee that the two arguments are distinct. This is most obvious if we think of arguments as being (partly) individuated by the sentences used, and if we think that this particular difference is sufficient to make the two arguments distinct. This is compatible with thinking that some differences, such as the one between passive and active voice, are not sufficient for distinguishing between arguments. Even if we think of arguments as made up of propositions, rather than sentences, it is possible to make the distinction, for example if one thinks that first-person propositions are different from third-person propositions. It is only on a strict Russellian view, combined with a logic of propositions, that one might think of the change between “I” and “he” as irrelevant here.

Besides, note that if the speaker stays the same throughout an argument, they will use the first person pronoun several times, if needed. It would be better if we could offer a logic that really recovered Repetition.

Like Williamson (1997), one might reply that arguments like (6) and (7), though offered by English speakers, are not the business of logic. Speakers may, for instance, assume or depend on the extra-logical belief that the time, place, and speaker of any part of an argument do not change throughout that argument. This is a risk speakers take, but, according to this objection, one that logic is not interested in. Logic, one might argue, looks at arguments that keep such terms coreferential within an argument, and has nothing to say about cases in which that does not happen.[15]

As Prior (1968) pointed out, this insistence on the purity of logic clashes with the semanticist’s project of describing natural languages, which are used in these risky ways. It does not much matter whether we call the semanticist’s project “logic” or not. But if there is a way to do it, an optimist is bound to keep trying, and to leave the wish for purity aside.

Williamson (1997) is a good example of a pessimist: his argument is that we have hit upon a domain where we cannot do things the way they are usually done, and therefore that domain is outside the scope of logic. So here is the challenge: to offer a variant of Repetition that does involve repeating sentences, and one that is recognizably logical.

5. Recovering Repetition, the Second Attempt

I noted above that some arguments in English involve repeating sentences that contain expressions that are arguably token reflexive, like “I” and “here”, and I claimed that this points towards a need to develop a logic that allows something closer to Repetition than Repetition\(^*\). But note also that English does not contain anything like \(\theta\).[16] Unlike \(\theta\), all candidates for token reflexivity in English, and, as far as I can tell, in natural languages in general, do allow coreference in an argument; they just do not guarantee it. So suppose we focus only on expressions whose tokens may corefer. Can we recover Repetition for those expressions?

A token of a token reflexive expression is assigned a referent by relating that token to that referent in the way prescribed by the character of the expression, which, for our purposes, can be represented by a function. Let \(\tau_{\underline{i}}\) be the \(i\)-th token of token-reflexive expression \(\tau\) in some argument, \(f_{\tau}\) be the function which represents the character of \(\tau\), \(P\) the set of parameters (about which more later), and \(p(\tau_{\underline{i}})\) be the parameter in \(P\) relevant to \(\tau_{\underline{i}}\). Then we can write this:

(8) \(⟦ \tau_{\underline{i}} ⟧^{P} = f_\tau (i,P) = p({\tau}_{\underline{i}})\)

We are now looking at functions that, unlike \(f_\theta\), allow coreference. The simplest way to ensure the validity of Repetition is to allow only arguments that force this coreference. This adapts to token reflexivity the constraint that Kaplan (1989) placed on arguments, namely that all premises and the conclusion must be evaluated relative to the same context. That forced, for instance, “I” to keep its referent constant throughout an argument, since one would only be looking at arguments with a single speaker.

This proposal is too blunt. Token reflexivity is interesting because it allows variation in reference. Can we get a version of Repetition, call it “Repetition\(^-\)”, that does not restrict this variation, but is still a recognizable logical rule?

Consider this candidate instance of Repetition:

(9) I am left handed. \(\therefore\) I am left handed.

What would it take for this argument to be valid? As far as “I” is concerned, what we need is a guarantee that the speaker stay the same throughout.[17] Kaplan (1989) would have us restrict our attention to arguments that provide such a guarantee, and disregard the others. If we do not, he might argue, we allow cases in which it merely so happens that the speaker stays the same throughout. But “it so happens” is no guarantee, and hence insufficient for validity.

I propose a different solution. First, enrich the notion of argument form so that it takes into account those features of the tokened argument to which token reflexive expressions are related by their characters. As a first pass, this allows us to replace (9) with two options, (10) and (11), where the first element of each pair is the place where the speaker parameter is specified for the relevant token of “I”:

(10) \(\langle\)☺, 'I am left handed.'\(\rangle \therefore \langle\)☺, 'I am left handed.'\(\rangle\)

(11) \(\langle\)☺, 'I am left handed.'\(\rangle \therefore \langle\)☹, 'I am left handed.'\(\rangle\)

In (10) the speaker of the first token of “I” is the same as the speaker of the second token. In (11) the speaker of the first token is different from the speaker of the second. As I think of them, both (10) and (11) are argument forms, not tokened arguments. The only information they contain about the speaker(s) is whether or not they stay the same throughout the argument. So when Susan utters the sentence "I am left handed" twice in order to apply Repetition, the argument she offers is a token of the argument-form (10).[18]

This was just a rough sketch, to prime your intuitions. Let me be a bit more precise. I will begin by distinguishing three notions of validity, depending on what they apply to. Since the topic of the paper is Repetition, I will state them as applied to that rule, but the discussion is meant to generalize. This distinction will later help me address some reasonable objections to this proposal.

First, we have argument-forms. An argument-form is a property of arguments, where arguments are made up of a pair of a premise and a conclusion, each of which is made up of a pair of a parameter set and a sentence. The argument-form specifies the relations between the sentences (identity, in the case of Repetition), and the relations between the parameter sets. An argument-form counts as an application of Repetition, and thus is valid, just in case the sentences are the same, and the parameters relevant to the characters of the token-reflexive expressions in those sentences stay the same, as do all parameters relevant to establishing the circumstance of evaluation, which I will assume here to just be a possible world.[19]

Second, we have formal representations of arguments, which I will call simply “arguments”. These contain a premise and a conclusion, each of which is a pair of a parameter set and a sentence. An argument is valid just in case it is an instance of a valid argument-form.

Finally, we have arguments in the wild. These are sentence tokens, as used by people in an attempt to present an argument. Information about the parameters is not assumed to be given either to the speaker or to the addressee. Arguments in the wild are valid just in case they are represented by a valid argument, that is, just in case they are represented by an argument that is an instance of a valid argument-form.

Repetition\(^-\), then, is not a rule that simply allows sentences to be repeated. Rather, it is a rule that, for example, says that (10) is valid, and (11) is invalid. There is no rule that simply allows the repetition of sentences with token reflexive expressions. Instead, Repetition\(^-\) tells us when sentences can be repeated, in terms of relations between features like the speaker of each token.

We can now give the general formulation of Repetition\(^-\), as an argument-form (again, abstracting away from everything relevant to validity except the referents of token-reflexive expressions). Suppose that \(S^1_{\underline{1}}\) is a token of sentence \(S^1\) containing token reflexive expressions \(\tau^1, \tau^2, \ldots \tau^n\), with the following tokens:

$$\tau^1_{\underline{1}}, \tau^1_{\underline{2}}, \ldots \tau^1_{\underline{i}}, \tau^2_{\underline{1}}, \tau^2_{\underline{2}}, \ldots \tau^2_{\underline{j}}, \ldots \tau^n_{\underline{1}}, \tau^n_{\underline{2}}, \ldots \tau^n_{\underline{k}}$$

Let \(P\) be the set of parameters relevant to the premise:

$$p(\tau^1_{\underline{1}}), p(\tau^1_{\underline{2}}), \ldots p(\tau^1_{\underline{i}}), p(\tau^2_{\underline{1}}), p(\tau^2_{\underline{2}}), \ldots p(\tau^2_{\underline{j}}), \ldots p(\tau^n_{\underline{1}}), p(\tau^n_{\underline{2}}), \ldots p(\tau^n_{\underline{k}}),$$

together with \(w_p\), the world parameter.

Let \(C\) be the corresponding set of parameters for the conclusion (\(c(\tau^1_{\underline{i+1}})\), \(c(\tau^1_{\underline{i+2}})\), etc., and \(w_c\)). Repetition\(^-\) is the following rule:

(12) \(\langle P, S^1_{\underline{1}} \rangle \therefore \langle C, S^2_{\underline{2}} \rangle\) is an instance of Repetition\(^-\) iff \(S^1=S^2\), and \(\forall x \in [1, i]\ p(\tau^1_{\underline{x}}) = c(\tau^1_{\underline{i+x}})\), and \(\forall x \in [1, j]\ p(\tau^2_{\underline{x}}) = c(\tau^2_{\underline{j+x}})\), \(\ldots\), and \(\forall x \in [1, k]\ p(\tau^n_{\underline{x}}) = c(\tau^n_{\underline{k+x}})\), and \(w_p=w_c\)
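As a rough gloss on (12), here is a small Python check; the encoding and the speaker labels are mine and purely hypothetical. An argument instantiates Repetition\(^-\) just in case the sentence is repeated, the parameters of each token-reflexive expression match tokenwise between premise and conclusion, and the world parameter is held fixed.

```python
def is_repetition_minus(premise, conclusion):
    """premise/conclusion: (sentence, {expression: [parameter per token]}, world).
    Note: an argument-form proper records only whether parameters match;
    concrete labels are used here for readability."""
    p_sentence, p_params, p_world = premise
    c_sentence, c_params, c_world = conclusion
    return (p_sentence == c_sentence            # S1 = S2
            and p_world == c_world              # w_p = w_c
            and p_params.keys() == c_params.keys()
            # for each expression tau, the x-th premise token's parameter
            # must equal the x-th conclusion token's parameter
            and all(p_params[tau] == c_params[tau] for tau in p_params))

same_speaker = (("I am left handed", {"I": ["Susan"]}, "w"),
                ("I am left handed", {"I": ["Susan"]}, "w"))
new_speaker  = (("I am left handed", {"I": ["Susan"]}, "w"),
                ("I am left handed", {"I": ["Tom"]}, "w"))

print(is_repetition_minus(*same_speaker))  # True: the shape of argument-form (10)
print(is_repetition_minus(*new_speaker))   # False: the shape of argument-form (11)
```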

In general, validity will be thought of as a relation among more than just sentences. The details depend on the language in question, so nothing but a sketch can be given here. Let me assume that we keep our discussion to closed formulas, so we need not worry about assignments to variables. Suppose we abbreviate “sentence token \(S^m_{\underline{n}}\) is true in model \(\mathfrak{M}\) with its token-reflexives’ semantic values fixed appropriately by \(P\)” with \({\vDash^{\mathfrak{M}}}\langle P, S^m_{\underline{n}} \rangle\). Then we could define validity as follows:

(13) \(\langle P_1, S^1_{\underline{1}} \rangle, \langle P_2, S^2_{\underline{2}} \rangle \ldots \langle P_n, S^n_{\underline{n}} \rangle \vDash \langle C, S^{n+1}_{\underline{n+1}} \rangle\) iff for all models \(\mathfrak{M}\) such that \(\forall i\in [1, n]\), \({ {\vDash^{\mathfrak{M}}}\langle P_i, S^i_{\underline{i}} \rangle }\), it is also the case that \({\vDash^{\mathfrak{M}}}\langle C, S^{n+1}_{\underline{n+1}} \rangle\)
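Definition (13) can likewise be rendered as a toy quantification over models; this is my own simplification, and the sentence and parameter labels are hypothetical stand-ins. An argument is valid iff every model verifying all the parameterized premises also verifies the parameterized conclusion.

```python
def true_in(model, pair):
    """model: maps (sentence, parameter-label) pairs to truth values."""
    return model[pair]

def valid(premises, conclusion, models):
    """Definition (13): truth preservation in every model."""
    return all(true_in(m, conclusion)
               for m in models
               if all(true_in(m, p) for p in premises))

same  = ("I am left handed", "speaker fixed")
other = ("I am left handed", "speaker changed")
models = [
    {same: True,  other: True},
    {same: True,  other: False},  # same sentence, different parameters
    {same: False, other: False},
]

print(valid([same], same, models))   # True: parameters repeated with the sentence
print(valid([same], other, models))  # False: the second model is a counterexample
```

The second call is the moral of (9)–(11) in miniature: repeating the sentence alone does not secure validity; the parameter set must be repeated too.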

In the remainder of the paper, I argue that Repetition\(^-\) is a worthy replacement for Repetition, by defending it against two objections, and then by comparing it to two other similar logics, proposed for slightly different purposes.

6. Objection 1: Isn’t Repetition\(^-\) about Propositions, rather than Sentences?

Repetition\(^-\) requires a notion of argument that includes more than the logical form of sentences; namely, it includes information about the parameters to which token reflexive expressions are sensitive. But once we include this kind of information, one might worry that we have left the realm of logic as concerning sentences, and gone back to classical conceptions of logic as concerning contents of sentences.

The first part of the answer is that this is not quite where we are. At most, Repetition\(^-\), which is a rule for argument-forms, requires information about the form of a sequence of pairs of a set of parameters and a sentence. The additional information concerns only whether certain parameters remain constant throughout an argument. In order to supply the relevant form, we do not need information, for example, about who the speakers are; we only need to know whether they stay constant throughout the argument.

Still, Repetition\(^-\) does go beyond sentences, and does contain some information about the referents of expressions. Is that enough reason to dismiss it as a worthy variant of Repetition? Consider again Repetition; it distinguishes between (5), which involves two occurrences of \(a\), and (14), which it counts as invalid:[20]

(14) \(Fa \therefore Fb\)

Suppose our new, extended, notion of argument form contains information about coreference between proper names. Then the worry is that Repetition\(^-\) would fail where Repetition gets things right, in that it might count (14) as valid just in case the two names corefer.

There still is a distinction to be made here. Suppose you hold a strictly Millian view of proper names, according to which their only semantically relevant feature is their bearer. In your formal semantics, you will then have various interpretation functions assign various referents to each name. Intuitively, the idea is that in their natural language uses, proper names are assigned a referent pre-semantically, by a process that semantics is thus blind to. This is the kind of picture defended in Salmon (1986) and Soames (1987); the details differ, but what matters here is just the idea that interpretation functions, semantically speaking, are allowed to vary maximally. By contrast, the classic picture of context sensitivity as developed in Kaplan (1989) attributes to semantics the additional task of describing some aspects of the assignment of a referent to an indexical. Thus, we get the semantic notion of a character, which tells us, intuitively speaking, that “I” refers to the speaker of the context, even though semantics, of course, remains neutral as to who exactly the speaker is at any point in time, place, etc.

Repetition\(^-\) , I would argue, makes use of information that is somewhere in between the two kinds. The kinds of argument forms it needs deserve to be within the scope of semantics because of the semantic nature of token reflexivity. But it also requires more information than Kaplan’s notion of a character did, since that notion allowed untokened expressions to be assigned a referent. Still, I would argue that the considerations that moved Kaplan to leave the assignment of a referent to proper names to pre-semantics can also be applied to our discussion, and thus we can reject a notion of an argument form that would make (some tokens of) (14) valid, and we can do all that while also coherently talking about the argument forms I have proposed.

A related worry is that it is customary to think of validity as knowable a priori, and this is usually thought to exclude information about referents from the domain of logic, since it is not a priori knowable what expressions refer to. This is where our distinction between three notions of argument is useful. First, assessing the validity of argument-forms is trivial. An argument form gives information about the relations between the premise and the conclusion that is needed for this assessment. The validity of an argument is also knowable a priori, since an argument contains the sets of relevant parameters.

What is true is that arguments in the wild are not assessable a priori for validity, and it is also relatedly true that there is no notion of validity for sequences of sentences. Let me address these two issues. Arguments in the wild were never a good candidate for a priori consideration. These are arguments where the speakers take risks, as Prior put it, and these risks are ineliminable. In fact, I take it as a virtue of Repetition\(^-\) that it provides a natural notion of validity for arguments in the wild, even if one whose assessment is not a priori. Whoever offers arguments in the wild, admittedly, has no guarantee that they are thereby providing an instance of a valid argument-form. But we do offer these arguments in the hope that they are valid.

The second issue is that I provide no notion of an argument as a sequence of sentences. There are two ways to make this observation into an objection.[21] The first is about logical custom: since at least Tarski, the tradition is to think of arguments as made up of sentences. But the novelty is not all that new. The notion of validity in Kaplan (1989) concerns sentences relative to a context, not sentences by themselves. This is somewhat obscured by the requirement that the context not change throughout an argument, but it does come out in Kaplan’s claim that some unexpected sentences turn out to be logical truths, such as “I exist”: they are true at all contexts, because all contexts of LD have an (existent) speaker. Repetition\(^-\) does allow more flexibility, since it does not require that all parameters stay fixed throughout an argument. But this is a difference in degree, not in kind from Kaplan’s proposals.

The second way to object to the lack of a notion of validity for sequences of sentences is to claim that, irrespective of tradition, what we judge to be valid are sequences of sentences, so any logic better provide some notion of validity for sentences alone. One might say, for instance, that in the wild we hear sentences, and that is what our judgments depend on. And this certainly fits with how logic classes are taught: contexts or parameters are never presented to students for assessing validity.

The main take-away point of the discussion of sentences (9)–(11) is that arguments as sequences of sentences are not sufficient to capture intuitions of validity for token-reflexive arguments. We teach logic the way we do because the language of first order logic does not contain any such expressions. But English does, and this additional expressive power forces us to reconsider our notion of validity.

7. Objection 2: Why Not Nihilism?

Classical Repetition is a simple rule. \(\theta\), and expressions like it, allow us to consider arguments that look like counterexamples to Repetition. A natural reaction, then, is to bite the bullet, and go for logical nihilism, that is, the view that there are no (universally) valid logical rules. Indeed, this is the view defended in Russell (2017), based on related, though different, considerations. I will first argue that this reaction is not warranted with respect to the token-reflexive expressions I have been discussing, and then I will address Russell’s original counterexample to Repetition.

In some respects, I agree with nihilism. First, as we saw in Section 2, \(\theta\) does provide counterexamples to Repetition, as classically defined. Second, although I claim that Repetition\(^-\) is a fine rule for many token-reflexive expressions, it has nothing useful to say about \(\theta\), whose tokens are guaranteed not to corefer.

But nihilism would have us go too far. French’s proposal shows that giving up Repetition, and, indeed, all object-level rules of logic, need not make logic trivial. His move to meta-arguments is, to be sure, a radical departure from usual conceptions of logic. But nihilism is an even more radical departure, perhaps the most radical one possible. This radicalism makes it easy for just about any other proposal to claim a considerable advantage over it.

Still, one might retreat to an object-level nihilism, and thus remain open to adopting French’s talk of meta-arguments.[22] This leaves Repetition\(^*\). It brings back the possibility of guaranteed coreference, at the price of giving up the reason why the rule was called “Repetition”, namely that it involved repeating a sentence, and nothing else. Furthermore, it requires new logical vocabulary, namely canonical names of token-reflexive expressions.

Yet again, while acknowledging these costs, it still seems to me that nihilism ought to be the last option, and that any other cost pales in comparison.

A better argument for taking the nihilist option is that there are other, independent arguments for nihilism. This is true of French’s proposal as well, but not of Repetition\(^*\). If we were already convinced that nihilism is true, token-reflexives would be just another nail in the coffin of logic as we know it. And indeed, Russell (2017: 129) offers a different type of counterexample to Repetition:

Suppose we had a predicate \(prem-white\), whose extension matches the extension of white when the sentence appears in the premises to an argument, but is the null set otherwise. Then the following argument would have a true premise but a false conclusion:

Snow is \(prem-white \therefore\) Snow is \(prem-white\)

There are some clear differences between \(prem-white\) and \(\theta\). The former changes its referent only as a function of its position in an argument, whereas the latter changes its referent with each tokening. The former is a predicate, and thus not covered by Repetition\(^*\). The former, but not the latter, allows for counterexamples to rules of predicate logic, such as universal elimination.

But none of these differences is all that important. Both are token-reflexive, in that their tokens are assigned referents by a rule that relates each token to its referent. And one could easily adapt Repetition\(^*\) to deal with predicates. We would first introduce canonical predicates, which are guaranteed to corefer with regular predicates, just as we did with canonical names above. So alongside the invalid application of Repetition above, we could have the valid:

(15) Snow is \(prem-white \therefore\) Snow is \(prem-white^*\)

Just like in the case of \(\theta\), Repetition\(^-\) has nothing useful to say about \(prem-white\). But French’s proposal would serve to filter out arguments with \(prem-white\) in them, and Repetition\(^*\) can be adapted to deal with it. Again, we have no reason to go all the way to nihilism. We have two good options, neither of which makes logic trivial.
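The adaptation can be put schematically as follows; the coreference stipulation for \(prem-white^*\) is my gloss on the strategy, not a quotation of an official definition.

```latex
% Repetition* extended to predicates (schematic gloss): for a
% token-reflexive predicate F, the canonical predicate F^* is
% stipulated to have the same extension as the premise token of F.

% Invalid (Russell-style): the conclusion token of prem-white has the
% null extension, so truth is not preserved.
\text{Snow is } \mathit{prem\text{-}white}
  \;\therefore\; \text{Snow is } \mathit{prem\text{-}white}

% Valid by stipulation: prem-white* corefers with the premise token.
\text{Snow is } \mathit{prem\text{-}white}
  \;\therefore\; \text{Snow is } \mathit{prem\text{-}white}^{*}
```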

8. Why Not Some Other Way?

Repetition\(^-\) is one of many possible ways to implement the basic idea of this paper, which is to provide a recognizable cousin of the rule of Repetition for token-reflexive expressions by taking into account not just sentences, but also the parameters that give token-reflexives their referents. My proposal requires thinking of an argument not just as a sequence of sentences, but as a sequence of pairs of parameter sets and sentences. So the familiar notion of logical form needs to be enriched quite dramatically.
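One minimal way to spell out this enriched notion of logical form might look as follows; the notation is mine, offered as a sketch rather than as the paper's official definitions.

```latex
% An argument is a sequence of pairs of parameter sets and sentences:
A \;=\; \langle (P_1, S_1), \ldots, (P_n, S_n) \rangle

% Each S_i is a sentence; each P_i is a set of parameters fixing the
% referents of the token-reflexive expressions in S_i (in particular,
% which tokens corefer with which).
%
% Validity (schematic): A is valid iff every admissible interpretation
% that respects P_1, ..., P_n and makes S_1, ..., S_{n-1} true also
% makes S_n true. Two arguments can thus contain the very same sequence
% of sentences yet differ in validity, because they differ in their
% parameter sets.
```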

One might, however, prefer to stick to thinking of arguments as made up of sentences. This is just the choice made in Zardini (2014) and Georgi (2015). These papers are about context dependence, but the extension to token-reflexive expressions is easily imaginable. The former paper attempts to give a notion of validity for arguments (thought of as sequences of sentences) containing context-dependent expressions, where, unlike in Kaplan (1989), contexts are allowed to vary mid-argument, that is, between premises or between the premises and the conclusion.[23] The latter is about the ability of demonstratives to refer to different things relative to the same context—or, as I would prefer to put it, their ability to have different tokens within a single argument refer to different things.

Zardini (2014: 3485) keeps the notion of an argument as a sequence of sentences (and nothing else) by proposing not one logic, but many, each distinguished from the others by the structure of the sequence of contexts that tells us whether two context-dependent expressions corefer or not. So validity is defined for sequences of sentences, but only relative to a particular logic, a logic that is determined by (what I would call) the sequence of sets of parameters. Georgi (2015) also attributes validity to sequences of sentences, by having validity be relative to a context, where that context allows different occurrences of demonstratives to refer to different things. So an argument is made up only of sentences, but it only counts as valid relative to some particular contexts, and invalid relative to others. This is possible because, unlike Zardini, Georgi keeps the idea that the whole argument takes place in the same context; the difference from Kaplan’s LD arises only relative to how occurrences of demonstratives are treated. So in both papers, what counts as valid is a sequence of sentences, but both modify the notion of validity to allow more freedom than the classical, non-relative, notion of validity.
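The three ways of locating the extra information can be summarized side by side; this is my schematic rendering of the views as described above.

```latex
% Where does the token-reflexive information live? (schematic)

% Zardini (2014): many logics; validity is relative to a logic L,
% determined by the structure of the context sequence.
\vDash_{L} \; S_1, \ldots, S_{n-1} \therefore S_n

% Georgi (2015): one logic; validity is relative to a single context c,
% within which occurrences of demonstratives may refer differently.
\vDash_{c} \; S_1, \ldots, S_{n-1} \therefore S_n

% This paper: one logic; the parameters are part of the argument itself.
\vDash \; \langle (P_1, S_1), \ldots, (P_n, S_n) \rangle
```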

Since both of these papers allow terms to change their referent within an argument, their logics also lack the guarantee that repeating a sentence leads to a valid argument. Though they do not attempt to do so, they could also define something similar to Repetition, and make many of the points I made earlier.

So how should we proceed? Is there any reason to think of arguments in my preferred, more revolutionary way? I begin by addressing some objections that Zardini (2014: 3479) offers against thinking of arguments as made up of anything other than sentences. In the course of the discussion, an argument for my preferred solution will also emerge.

The first objection is that if an argument is made up of sentence/set-of-parameters pairs, validity would be fragmented too much. For instance, if two tokens of an expression corefer in one argument, and two others in another argument, even if the two arguments are structurally similar, we would miss that similarity by looking at each argument in isolation. The fact that we could formulate Repetition\(^-\), which seems to me to unify a certain natural kind of valid argument, shows that Zardini’s worry is unfounded. It is true that the validity of a particular argument may not depend on what happens with other arguments that are structurally similar in some way. But valid rules of inference can still be defined with the right kind of generality.

The second objection is that validity would come out a posteriori and contingent. I discussed the aprioricity worry above. The worry about contingency is best applied to Zardini’s other target in those passages, namely utterance logic. In thinking of pairs of sentences and sets of parameters, talk of contingency and necessity seems out of place. But if one wanted to take token-reflexive logics a step further in the direction of utterance logics, one would indeed need to decide whether utterances necessarily take place in a particular context, and what to say about the modal force of validity given that choice. This is a hard problem, but not one that we need to address here.[24]

On the positive side, our discussion above of the motivation for adding information about parameters to the logical form of arguments offers some reasons to prefer the proposals of this paper to those in Zardini (2014) and Georgi (2015). As far as I can tell, there is no difference in the logical laws that come out valid in all of Zardini’s logics and their natural extension to token reflexives.[25] Georgi’s focus is more limited, so it is hard to compare at that level, but the motivation of his paper is very much akin to that of the current one: accounting for expressions that change their referent mid-sentence. So reasons for or against any of these proposals will be philosophical in nature. I claim that thinking of arguments as including sequences of parameters shows how dramatic a change we need to make to the customary notion of an argument, and places the notion of aprioricity at the right level: if one lacks knowledge of the relations between the referents of token-reflexive expressions, one cannot judge validity, because one does not know what argument was offered. In Zardini’s system, someone who lacks such knowledge would be characterized as not knowing in which logic the argument should be judged. This seems backward: there is just one logic for these expressions, and it does not change with the vagaries of token-reflexive expressions. Rather, we need one logic, but many arguments that contain the very same sequence of sentences, some of which arguments may be valid, and some not.

9. Conclusion

We started with one simple rule, Repetition. Apart from nihilism, I discussed three options to deal with problems that token-reflexive expressions pose for Repetition. Each of these options has something to be said for it. Should we not choose one of them?

My answer is that it all depends on our goals. Each option has costs and advantages. French’s proposal leaves the object language alone, and is not in danger of collapsing into a logic of propositions; but we lose the notion of valid arguments, which is replaced by the notion of valid meta-arguments. Repetition\(^*\) preserves object-language validity, and is a purely sentential (i.e., not propositional) logic; but it requires the introduction of canonical expressions, and departs most from Repetition, since it is not actually about repeating sentences. Repetition\(^-\) is about repeating sentences, does not require syntactical innovations, and, I claimed, matches our use of token-reflexive expressions; but it requires non-syntactical information, and is only useful for some token-reflexive expressions.

I am interested in the semantics of English. I believe that there are token-reflexive expressions in English, although I did not argue for this claim in this paper. I also believe that we do sometimes just repeat these expressions while arguing with them, and that it would be good for a logic to capture the conditions for doing this safely, even if it is not a logic of propositions. For these purposes, Repetition\(^-\) seems to me to be the best quasi-replacement for Repetition. But for other purposes, the other options might be preferable.

Most importantly, I would hope at least to have shown that token-reflexive expressions create logical problems, but also opportunities for rethinking what is behind familiar logical rules.


Acknowledgments

I would like to thank two anonymous referees at Ergo, the audience at the 2018 meeting of the Society for Exact Philosophy, Marina Folescu, David Kaplan, Eliot Michaelson, Matt McGrath, Bryan Pickel, Brian Rabern, and especially David Braun. They all attempted to improve this paper or the ideas therein, and hopefully succeeded.


References

  • Bach, Kent (2007). Reflections on Reference and Reflexivity. In Michael O’Rourke and Corey Washington (Eds.), Situating Semantics: Essays on the Philosophy of John Perry (395–426). MIT Press.
  • Braun, David (1995). What Is Character? Journal of Philosophical Logic, 24(3), 227–240. https://doi.org/10.1007/BF01344202
  • Braun, David (1996). Demonstratives and Their Linguistic Meanings. Noûs, 30(2), 145–173. https://doi.org/10.2307/2216291
  • Correia, Fabrice (2014). Logical Grounds. The Review of Symbolic Logic, 7(1), 31–59. https://doi.org/10.1017/S1755020313000300
  • Crimmins, Mark (1992). Context in the Attitudes. Linguistics and Philosophy, 15(2), 185–198.
  • Crimmins, Mark (1995). Contextuality, Reflexivity, Iteration, Logic. Philosophical Perspectives, 9, 381–399. https://doi.org/10.2307/2214227
  • French, Rohan (2016). Structural Reflexivity and the Paradoxes of Self-Reference. Ergo, 3(5), 113–131. https://doi.org/10.3998/ergo.12405314.0003.005
  • García-Carpintero, Manuel (1998). Indexicals as Token-Reflexives. Mind, 107(427), 529–563. https://doi.org/10.1093/mind/107.427.529
  • Georgi, Geoff (2011). Demonstratives in Logic and Natural Language (Unpublished doctoral dissertation). University of Southern California.
  • Georgi, Geoff (2015). Logic for Languages Containing Referentially Promiscuous Expressions. Journal of Philosophical Logic, 44(4), 429–451. https://doi.org/10.1007/s10992-014-9335-5
  • Kalish, Donald, Richard Montague, and Gary Mar (1980). Logic: Techniques of Formal Reasoning (2nd ed.). Oxford University Press.
  • Kaplan, David (1989). Demonstratives. In Joseph Almog, John Perry, and Howard Wettstein (Eds.), Themes from Kaplan (481–563). Oxford University Press.
  • Kaplan, David (1999). Reichenbach’s Elements of Symbolic Logic. German translation in Andreas Kamlah and Maria Reichenbach (Eds.), Hans Reichenbach: Gesammelte Werke (Vol. 6). Vieweg.
  • Kripke, Saul (2008). Frege’s Theory of Sense and Reference: Some Exegetical Notes. Theoria, 74(3), 181–218. https://doi.org/10.1111/j.1755-2567.2008.00018.x
  • Künne, Wolfgang (1992). Hybrid Proper Names. Mind, 101(404), 721–731. https://doi.org/10.1093/mind/101.404.721
  • Künne, Wolfgang (2010). Sense, Reference and Hybridity. Dialectica, 64(4), 529–551.
  • Montague, Richard (1970). Universal Grammar. Theoria, 36(3), 373–398.
  • Perry, John (2001). Reference and Reflexivity. CSLI.
  • Prior, A. N. (1968). Fugitive Truth. Analysis, 29(1), 5–8. https://doi.org/10.1093/analys/29.1.5
  • Radulescu, Alexandru (2015). The Logic of Indexicals. Synthese, 192(6), 1839–1860. https://doi.org/10.1007/s11229-015-0659-7
  • Rami, Dolf (2014). The Use-Conditional Indexical Conception of Proper Names. Philosophical Studies, 168(1), 119–150.
  • Reichenbach, Hans (1947). Elements of Symbolic Logic. Macmillan.
  • Richard, Mark (1993). Attitudes in Context. Linguistics and Philosophy, 16(2), 123–148. https://doi.org/10.1007/BF00985177
  • Russell, Gillian (2017). An Introduction to Logical Nihilism. In Hannes Leitgeb, Ilkka Niiniluoto, Palvi Seppala, and Elliott Sober (Eds.), Logic, Methodology and Philosophy of Science – Proceedings of the 15th International Congress (125–135). College Publications.
  • Salmon, Nathan (1986). Frege’s Puzzle. MIT Press.
  • Schoubye, Anders (2017). Type-Ambiguous Names. Mind, 126(503), 715–767.
  • Soames, Scott (1987). Direct Reference, Propositional Attitudes, and Semantic Content. Philosophical Topics, 15(1), 47–87. https://doi.org/10.5840/philtopics198715112
  • Strawson, Peter F. (1950). On Referring. Mind, 59(235), 320–344. https://doi.org/10.1093/mind/LIX.235.320
  • Textor, Mark (2007). Frege’s Theory of Hybrid Proper Names Developed and Defended. Mind, 116(464), 947–981. https://doi.org/10.1093/mind/fzm947
  • Williamson, Timothy (1997). Sense, Validity and Context. Philosophical and Phenomenological Research, 57(3), 649–654. https://doi.org/10.2307/2953758
  • Zardini, Elia (2014). Context and Consequence. An Intercontextual Substructural Logic. Synthese, 191(15), 3473–3500. https://doi.org/10.1007/s11229-014-0490-6


    1. “Repetition” is the name used in Kalish, Montague, and Mar (1980); the rule also goes by “Reflexivity” and “Identity”.

    2. Montague (1970: 374, Footnote 2).

    3. Russell (2017) offers related, but different, arguments for such a nihilist view. They rely on expressions changing their interpretation depending on whether they occur before or after the semantic entailment sign. We will come back to nihilism in Section 7. I should also note that most of this paper will be focused on Repetition, and more broadly on sentential logic, and thus that issues arising specifically from quantifier logic will not be considered.

    4. Some readers may balk at the idea that natural languages contain token-reflexive expressions, or that if there are any, they belong within the purview of semantics. This paper may still be interesting for those readers, for two reasons. First, token-reflexive expressions certainly can be introduced into a language, and they would give rise to some of the issues I consider. Second, even if there are good reasons to think that there are no token-reflexive expressions in natural languages, this paper, I hope, shows that logical issues are not among those reasons.

    5. See Perry (2001) for a defense of this thesis, and Bach (2007) for more discussion. I should also note the distinction between token-reflexivity and utterance-reflexivity. One token can be used in different utterances, and hence the two notions are different. All issues I discuss about token-reflexivity are at least as pressing for utterance-reflexive ones, especially if one assumes that there can be no utterance without tokens (a non-trivial assumption, to be sure).

    6. “Ineliminably” because I want to exclude rules like the following: a token of X refers to the number that has the following property: being such that it is identical to 0 and that token is self-identical. This rule does, in some sense, relate the token to its referent, but it really is a token-reflexive rule only in the weak sense, not in the sense I intend. My notion of a rule here is in the neighborhood of Braun’s (1995) proposal that one think of characters as relations, rather than functions, thus allowing their identity conditions to be richer than those of merely extensional functions.

    7. \(\theta\) is, as far as I know, first discussed in Kaplan (1999). It is a simplified version of \(\Theta^*\) as introduced in Reichenbach (1947: 287). Expressed as a character rule, the meaning of Reichenbach’s \(\Theta^*\) is the rule that says that each token refers to the largest expression token of which it is a part (e.g., a sentence).

    8. A version of this argument that Repetition fails for token-reflexive expressions is also given in Kaplan (1999).

    9. French’s proposal is given in proof-theoretic terms, so I am adapting it to my purposes.

    10. Note that French’s proposal could also be adapted to address the nihilist arguments in Russell (2017), since it could also serve to filter out expressions that change their interpretation depending on whether they occur in a premise or in a conclusion.

    11. Kaplan (1999) sketches a variable-lite account, which should work, so long as it is clear what is “bound” by what. Kaplan’s proposal is to think of \(\theta\) as ambiguous, so that the disambiguation provides indexes which settle the binding issue. As we will see, that is not necessary: the form of the argument may just as well depend on having several occurrences of \(\theta\) in the premises, proposing an ordering of these occurrences, and then keeping track of the first one, the second one, etc.

    12. The idea here is that we add n such names to the language every time we construct an argument with n tokens of \(\theta\). So we need the notion not just of a well formed formula, but also of a well formed argument, unless we want to allow empty canonical names. Alternatively, we could assume that there is an absolute ordering of tokens of \(\theta\), and let there be as many such names as there are tokens. This way, our language would not need to be enriched every time we construct an argument.

    13. This remark is only useful if names are, as tradition would have it, not token reflexive, nor context dependent in some other way. See Rami (2014) and Schoubye (2017) for the latter kind of view.

    14. Braun (1996) and García-Carpintero (1998) offer this argument against a rule similar to Repetition, as proposed in Kaplan (1989).

    15. Williamson allows that this argument will fail for demonstratives like “this” and “he”, since their role would be severely constrained if they were not allowed to change referent within an argument. For such expressions, Williamson offers subscripts in the style of Kaplan (1989), a solution that will not work for \(\theta\), since there is no question of ambiguity there. I see no reason against applying the same reasoning to “today” and “I”: if it is bad to constrain the referential freedom of “he” and “that” by indexing them, effectively making “he” and “that” massively ambiguous, then it seems to me equally bad to have a logic that constrains “today” by forcing all arguments to be instantaneous.

    16. Perhaps “this token”, in some uses, is a close approximation of \(\theta\). One option would be to restrict the focus of this third proposal to expressions that are less obstinate than that. But maybe one does not need to go that way: “this token” strikes me as philosophers’ English, rather than the Queen’s English, and I’m aiming for the latter here.

    17. I am focusing here on “I”. Tenses are, arguably, also token-reflexive expressions, so we would need a similar discussion for them.

    18. Frege often said that in order to express some thoughts, such as first person ones, the sentences themselves must be supplemented by things in the world, such as the speaker, the time of the utterance, the place, etc. So those thoughts are expressed by a hybrid made up of sentences and non-linguistic objects. See, e.g., Kripke (2008), Künne (1992; 2010), Textor (2007). Furthermore, he also famously claimed that the same thought can be expressed on different days if one changes “today” to “yesterday”, and the tenses as needed. The latter feature was used as a partial motivation in Zardini (2014) and Radulescu (2015) when proposing logics that allow contexts to change mid-argument. Exactly how to relate validity to this Fregean view, which has been historically very unpopular, though lately perhaps less so, remains a question for a later time.

    19. I should note that something similar to my notion of argument-forms was briefly entertained in a discussion between Mark Crimmins and Mark Richard; see Crimmins (1992), Richard (1993), and especially Crimmins (1995).

    20. Recall that we are assuming that proper names are not token reflexive. If they are, similar arguments can be given for any expression that is deemed non-reflexive.

    21. I thank an anonymous reviewer for making this distinction.

    22. Russell (2017), as the title suggests, is not meant to be a thorough discussion of the issue. In particular, it does not discuss proposals like French’s, so I am only imagining possible responses here.

    23. This was also the motivation of my earlier Radulescu (2015); I make some brief remarks there about Zardini (2014) and an earlier version of Georgi (2015), namely Georgi (2011). These three papers came out roughly at the same time, and, as far as I know, were written in ignorance of each other. I am grateful to an anonymous reviewer for drawing my attention to the fact that I needed to relate the current proposals to those made in these papers.

    24. Zardini (2014: 3490, Footnote 32) offers a third objection, which depends on the assumption that the different occurrences of “here” would have to be assigned the same referent in a context. This is true of Kaplan’s LD, but is not an essential feature of logics of context-dependent expressions. In any case, the assumption is clearly false of token-reflexive expressions.

    25. A more extensive comparison, along with a general definition of validity, is in the works.