# A Reasonable Little Question: A Formulation of the Fine-Tuning Argument

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Please contact mpub-help@umich.edu to use this work in a way not covered by the license.


## Abstract

A new formulation of the Fine-Tuning Argument (FTA) for the existence of God is offered, which avoids a number of commonly raised objections. I argue that we can and should focus on the fundamental constants and initial conditions of the universe, and show how physics itself provides the probabilities that are needed by the argument. I explain how this formulation avoids a number of common objections, specifically the possibility of deeper physical laws, the multiverse, normalisability, whether God would fine-tune at all, whether the universe is too fine-tuned, and whether the likelihood of God creating a life-permitting universe is inscrutable.

## 1. Introduction

Is the physical world all that exists? Are the ultimate laws of the physical universe the end of all explanations, or is there something about our universe that is noteworthy or rare or clever or unexpected?

The Fine-Tuning Argument (FTA) for the existence of God puts forward just such a fact. The claim is that the existence of a universe that supports the complexity required by physical life forms is remarkable. To be sure, it is a familiar fact—after all, we exist. But new information has seemingly made this familiar fact into an astounding one: in the set of fundamental parameters (constants and initial conditions) of nature, such as the cosmological constant and the strength of electromagnetism, an extraordinarily small subset would have resulted in a universe able to support the complexity required by life. This is known as the fine-tuning of the universe for life.

The FTA claims that, given the fine-tuning of the universe, the existence of a life-permitting universe is very unexpected given naturalism — that “there is only one world, the natural world . . . [which] evolves according to unbroken patterns, the laws of nature” (Carroll 2016: 20)—but not particularly unexpected given theism—that God exists. It thus provides evidence for the existence of God. It is worth remembering, before we consider the range of philosophical formulations and responses, that the argument has considerable intuitive force. Faced with his own fine-tuning discoveries in physics and astronomy, Fred Hoyle commented that, “a common sense interpretation of the facts suggests that a superintellect has monkeyed with physics, as well as with chemistry and biology, and that there are no blind forces worth speaking about in nature” (Hoyle 1982).

Philosophers of religion have formulated the FTA in a number of ways. Swinburne (2004) presents the FTA as a “C-inductive” argument, that is, an argument that adds to the probability of the conclusion. Swinburne argues that the probability that human bodies exist, given that the universe conforms to natural laws, is very low if theism is false, and not very low if theism is true. In a similar vein, Collins (2009) argues that given the fine-tuning evidence, a life-permitting universe is “very, very epistemically unlikely” under the hypothesis that there is a single naturalistic universe, but not unlikely under theism. Given that theism “was advocated prior to the fine-tuning evidence (and has independent motivation)”, it follows that the existence of a life-permitting universe strongly supports theism over the naturalistic single universe hypothesis. (I will discuss multiple naturalistic universes in a later section). Roberts (2011), motivated by the problem of old evidence, formulates the argument with the existence of a life-permitting universe as background evidence and the fine-tuning of the universe for life as new evidence on which we update our credences. Craig (2003) presents the FTA as a syllogism: “(1) The fine-tuning of the initial state of the Universe is due to either physical necessity, chance, or design. (2) It is not due to physical necessity or chance. (3) Therefore, it is due to design.” Physical necessity is discounted because of the existence of alternative mathematical laws of nature and the non-uniqueness of initial conditions in all physical laws. Chance is discounted because the probabilities involved are extremely small, and the universe conforms to an independent pattern (it is life permitting).

Unsurprisingly, a wide range of objections have been raised against the FTA. We will discuss the following objections in later sections.

- Deeper Laws: the constants and initial conditions simply reflect the unfinished state of current physics. Physics will progress until we find, in the words of Einstein, “such strongly determined laws that within these laws only rationally completely determined constants occur (not constants, therefore, whose numerical value could be changed without destroying the theory)” (quoted in Schilpp 1969: 63).
- Multiverse: it is physically possible (or, more strongly, reasonable extensions to current physical theories make it plausible) that the constants and initial conditions that characterise our observable universe vary through time and space, effectively creating a vast number of variegated universe domains. Within this multiverse, the right conditions for life are likely to turn up somewhere, and of course, life will only exist to ask questions in the very rare life-permitting universe domains.
- Normalisability: McGrew, McGrew, and Vestrup (2001) and Colyvan, Garfield, and Priest (2005) have argued that the probability measure that the FTA attempts to apply to the constants and initial conditions of nature, because it is spread evenly over an infinitely large range, cannot be normalised and hence the relevant probabilities cannot be calculated. We cannot conclude that a life-permitting universe is improbable on naturalism.
- God wouldn’t Fine-Tune: a number of authors have questioned why God would create a universe that needed to be fine-tuned in the first place. Why create this hurdle, only to then overcome it? Why not simply make a universe in which life is a likely outcome regardless of the fundamental parameters? Why doesn’t God use his power to make life in any universe?
- Too Fine-Tuned: there are characteristics of our universe that are too fine-tuned, in the sense that our universe is significantly less likely than life requires. This amounts to a failed prediction for theism, or an opportunity for another ad hoc theist dodge.
- Inscrutable God: Manson has defended the objection that the probability of a life-permitting universe on theism is inscrutable, that we cannot make an estimate of it. The sceptic of fine-tuning reasons that “God’s mind is so different from ours that we cannot judge what God would be likely to create, or even whether God would be likely to create at all” (2018: 4).

In this paper, I present a formulation of the FTA that answers these objections in a new way. Section 2 will review Bayesian probability theory, on which my formulation will be based. Section 3 will present and defend the premises of the argument. Section 4 will reply to objections.

## 2. Testing Theories, the Bayesian Way

There are some subtle differences between the way physicists and philosophers use the Bayesian approach to probability theory that are important to our presentation of the FTA. I have discussed the Bayesian approach to reasoning in the physical sciences elsewhere (Barnes 2017; 2018), and will summarise the important points here.

Bayesian probabilities \(p(A|B)\) are developed (for example, by Jaynes 2003) as an extension to classical logic, quantifying the degree of plausibility of the proposition \(A\) given the truth of the proposition \(B\); following Climenhaga (2019), we can equivalently speak of the degree of support that \(B\) gives to \(A\). Why think that degrees of plausibility or support can be mathematically modelled by probabilities? There are a number of approaches that lead Bayesians to the probability axioms of Kolmogorov (1933) or similar, such as Dutch book arguments and representation theorems that trace back to Ramsey (1926). Better known among physicists is the theorem of Cox (1946; see also Caticha 2009; Jaynes 2003; Knuth & Skilling 2012), which shows that the standard probability rules follow from some simple desiderata, such as “if a conclusion can be reasoned out in more than one way, then every possible way must lead to the same result.”

These give the usual product, sum and negation rules for each of the Boolean operations ‘and’ (\(AB\)), ‘or’ (\(A+B\)) and ‘not’ (\(\bar{A}\)).[1] From the product rule, we can derive Bayes’ theorem (assuming \(p(B | C) \neq 0\)),

$$p(A|BC) = \frac{p(B|AC) ~ p(A | C)}{p(B | C)} ~. \tag{1}$$

These are identities, holding for any propositions \(A\), \(B\) and \(C\) for which the relevant quantities are defined. In philosophical presentations, Bayes theorem often comes attached to a narrative about prior beliefs, updating and conditioning; none of this is essential. Assigning known propositions to \(B\) and \(C\) in Equation (1) is purely for convenience. This is worth stressing: there is nothing in the foundations of Bayesian probability theory, or in its actual use in the physical sciences, that mandates that we apply Bayes theorem in chronological order, that is, that in applying Equation (1), we must have learned \(B\) after we learned \(C\). This chronological mandate is often imposed by philosophers; I have never seen it imposed by scientists or statisticians.[2]

When Bayes theorem is used to calculate the probability of some hypothesis or theory \(T\), given evidence \(E\) and background information \(B\), the corresponding terms in Equation (1) are labelled as follows: \(p(T|EB)\) is the posterior probability, \(p(T|B)\) is the prior probability, \(p(E|TB)\) is the likelihood, and \(p(E|B)\) is the marginal likelihood. But remember: these are mere labels.
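As a concrete illustration, the labelled terms can be combined in a few lines. The following sketch uses hypothetical numbers: a theory with a modest prior whose evidence is strongly expected if the theory is true.

```python
# Hypothetical numbers for illustration only.
prior = 0.01              # p(T|B)
likelihood = 0.95         # p(E|TB)
likelihood_not_t = 0.05   # p(E|~T B)

# Marginal likelihood p(E|B), by the law of total probability
marginal = likelihood * prior + likelihood_not_t * (1 - prior)

# Posterior p(T|EB), Equation (1)
posterior = likelihood * prior / marginal
```

Note that even a likelihood ratio of 19 to 1 in favour of \(T\) lifts a 1% prior only to a posterior of about 16%: both priors and likelihoods matter.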

Our argument will focus on likelihoods. We can write Bayes theorem in the following form,

$$p(T|EB) = \frac{p(E|TB) ~p(T|B)}{p(E|TB) ~p(T|B) + p(E|\bar{T}B) ~p(\bar{T}|B)} ~. \tag{2}$$

Note two important points. Firstly, all theory testing is theory comparison. In Equation (2), we must evaluate the term \(p(E|\bar{T}B)\) which is the likelihood of the evidence given that the theory \(T\) is not true. We must compare the expectations of our theory of interest with the expectations of rival theories, considered together as \(\bar{T}\).

Secondly, theories are rewarded according to how likely they make evidence. Likelihoods are normalised with respect to evidence: \(p(E|TB) + p(\bar{E}|TB) = 1\). A theory is given one unit of probability to spend among the possible sets of evidence, and must choose wisely where to place its bets. A prodigal theory that squanders its likelihood on evidence that isn’t observed—by spreading it thinly over a wide range of incompatible outcomes, for example—is punished relative to more discerning theories. Such wasteful theories include what are known in probability theory as non-informative theories: a theory is non-informative with respect to a set of outcomes/statements if it gives us no reason to expect any particular member of the set. For finite sets, non-informative theories can justify the use of the principle of indifference, whereby we assign equal probabilities to each member of the set. For infinite sets, non-informative distributions have been derived for specific cases, and include flat distributions, the Jeffreys distribution, maximum entropy, and more (Kass & Wasserman 1996). It is an open question whether there are general principles that govern all non-informative distributions. Note well that non-informative theories do not automatically have low posterior probabilities. For example, the theory “Alice shuffled the deck of cards thoroughly” is non-informative with regard to the order of the deck, but is not thereby necessarily implausible.
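The prodigal-theory point can be made numerically. In this sketch (all numbers hypothetical), a non-informative theory spreads its unit of likelihood evenly over many possible outcomes, while a discerning rival concentrates its likelihood on the outcome actually observed; Equation (2) then rewards the discerning theory.

```python
# A prodigal/non-informative theory spreads its one unit of likelihood
# evenly over many incompatible outcomes; a discerning theory bets
# heavily on the outcome E that is actually observed.
n_outcomes = 1000
p_e_prodigal = 1.0 / n_outcomes    # indifference over 1000 outcomes
p_e_discerning = 0.5               # half its likelihood placed on E

# With equal priors, Equation (2) reduces to a ratio of likelihoods.
bayes_factor = p_e_discerning / p_e_prodigal
posterior_discerning = p_e_discerning / (p_e_discerning + p_e_prodigal)
```

With equal priors, observing \(E\) drives the posterior of the discerning theory to roughly 500/501, even though neither theory was implausible beforehand.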

## 3. Formulation of the FTA

In Lewis and Barnes (2016: 344), I presented a popular-level version of the FTA as follows:

- Naturalism is non-informative with respect to the ultimate laws of nature.
- Theism prefers ultimate laws of nature that permit the existence of moral agents, such as intelligent life forms.
- The laws and constants of nature as we know them are fine-tuned—vanishingly few will produce intelligent life.
- Thus, the probability of this (kind of) universe is much greater on theism than naturalism.

As noted in Lewis and Barnes (2016), a problem with this argument is that we don’t know the ultimate laws of nature, as referred to by the first two premises. Why should we care that the laws as we know them (third premise) are fine-tuned?

This paper aims to answer this question. Drawing on the Bayesian framework for testing theories, I formulate the FTA along the same lines as Swinburne (2004) and Collins (2009).

[1] For two theories \(T_1\) and \(T_2\), in the context of background information \(B\), if it is true of evidence \(E\) that \(p(E|T_1B) \gg p(E|T_2B)\), then \(E\) strongly favours \(T_1\) over \(T_2\).

[2] The likelihood that a life-permitting universe exists on naturalism is vanishingly small.

[3] The likelihood that a life-permitting universe exists on theism is not vanishingly small.

[4] Thus, the existence of a life-permitting universe strongly favours theism over naturalism.

The key point of this paper is the calculation in support of Premise [2].

[5] To evaluate the likelihood that a life-permitting universe exists on naturalism (and on theism), we should restrict our focus to the subset of possible universes generated by varying the fundamental constants of nature.

[6] Given our restricted focus, naturalism is non-informative with respect to the fundamental constants.

[7] Physicists routinely assign non-informative probability distributions to fundamental constants, which we can use to calculate the likelihood that a life-permitting universe exists on naturalism.

[8] Using these distributions, the likelihood that a life-permitting universe exists on naturalism is vanishingly small (which establishes Premise [2]).

We will now consider each of the key premises.

### Premise [1]

For two theories \(T_1\) and \(T_2\), in the context of background information \(B\), if it is true of evidence \(E\) that \(p(E|T_1B) \gg p(E|T_2B)\), then \(E\) strongly favours \(T_1\) over \(T_2\). If we accept the Bayesian arguments that probabilities model degrees of plausibility, then whether \(E\) supports one theory over another is modelled by its effect on the posterior probabilities of those theories. This premise then follows from Bayes theorem.

This principle is widely used and intuitive. When thinking through a hypothesis, we often ask: if this idea were true, what would I expect to be the case? In the Bayesian framework, the likelihood can be thought of as quantifying degrees of expectation. It is important to stress that Premise [1] is a sufficient condition. I do not claim that a Bayesian approach can solve every problem in epistemology, or that a likelihood can be calculated for every enquiry one might make about the world, or that for any two propositions \(A\) and \(B\), the Bayesian probability \(p(A|B)\) exists or is knowable. Rather, the claim is that, if the relevant Bayesian probabilities can be calculated, even in an approximate way, then we should use them. Given the solid foundations of the Bayesian approach, anchored by the theorems of Cox et al., and given the widespread usage and excellent track record of this approach in the physical sciences, these probabilities should not be discarded. Their effect on our beliefs should be carefully considered.
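Premise [1] can be restated in odds form: posterior odds equal prior odds multiplied by the likelihood ratio. A sketch, with hypothetical numbers standing in for \(p(E|T_1B) \gg p(E|T_2B)\):

```python
# Premise [1] in odds form: posterior odds = prior odds * likelihood ratio.
# The numbers are hypothetical stand-ins for p(E|T1 B) >> p(E|T2 B).
prior_odds = 1.0          # no initial preference between T1 and T2
likelihood_ratio = 1e6    # p(E|T1 B) / p(E|T2 B)

posterior_odds = prior_odds * likelihood_ratio
posterior_t1 = posterior_odds / (1.0 + posterior_odds)
```

Even an agnostic prior is driven to near-certainty in \(T_1\) when the likelihood ratio is large enough; this is the sense in which \(E\) “strongly favours” \(T_1\) over \(T_2\).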

### Premise [5]

To evaluate the likelihood that a life-permitting universe exists on naturalism (and on theism), we should restrict our focus to the subset of possible universes generated by varying the fundamental constants of nature. Our focus here will be on the subset of universes; we will discuss assigning probabilities to that subset later.

In thinking about the success or failure of naturalism to account for our universe, it is fitting that we consider our expectations on the level of the ultimate laws of nature. Naturalism affirms that “there is a chain of explanations concerning things that happen in the universe, which ultimately reaches to the fundamental laws of nature and stops” (Carroll 2007). In light of the central importance of ultimate laws, we are led to the following question.

The Big Question: of all the possible ways that a physical universe could have been, is our universe what we would expect on naturalism?

However, the Big Question is too big. Specifying an example of a possible ultimate law of nature would involve (at least, if we follow the example of modern physics) specifying a certain mathematical structure, such as a model of spacetime and some quantum fields in a Hilbert space. Thinking about all the possible ways that a physical universe could have been would involve thinking about (at least) every possible mathematical law of nature, even ones that use mathematical equations and structures not yet invented/discovered. We would need to distribute probabilities not only over variables, but over equations and structures. And we would need to be able to calculate the consequences of all these equations, to know what their associated universe would be like. These tasks are practically impossible.

And yet, the naturalist should not abandon the Big Question entirely. If we have no idea at all what kind of physical universe we would expect on naturalism, then a number of arguments for naturalism and against the existence of God fail. Take four examples: (i) if we are an accidental product of blind nature, then we might expect to exist in a boring, typical, insignificant part of the universe—and here we are, on the third rock from an average star; (ii) naturalism expects enjoyable and harmful things to happen around us, with no rhyme or reason—and life often involves profound and unfair suffering; (iii) naturalism expects natural forces to run the universe with no exceptions—and miracle claims are rare and dubious; and (iv) naturalism expects the success of science.

However, without expectations these arguments fail. Recall that all theory testing is theory comparison. The naturalist may produce a convincing argument from evil and suffering (ii) that shows that this aspect of our universe is highly unexpected on theism. But, unless there is some way to get a handle on whether evil (or the appearance of evil, or suffering) is expected on naturalism, this argument will not affect the posterior probability of naturalism or theism. Similarly, concerning (iv) the naturalist would not be able to say whether we would expect a naturalistic universe to be orderly enough to do science. There are possible naturalistic universes in which science—the human attempt to discover the laws of nature, or summarise the phenomena of nature in a way that is “meaty and pithy and helpful and informative and short” (Albert 2015: 23)—is a failure. Natural events would have natural causes, but they would not be able to be systematised into laws. If we have no idea what kind of natural world to expect on naturalism, then no fact about the natural world can be evidence for naturalism.

I suggest a way forward: we find a smaller, answerable question that reflects the Big Question. This follows in the noble physics tradition of the spherical cow, wherein a difficult (usually pencil-and-paper unsolvable) problem is substituted by a simpler problem that resembles the original. So, physicists solve problems about infinitely large capacitors made of frictionless slopes in a vacuum.

If the set of all the possible ways that a naturalistic physical universe could have been is too big to handle, then we should look for a subset of that set. Looking at the current laws of physics, a promising candidate emerges. The standard model of particle physics and the standard model of cosmology (together, the standard models) contain 31 fundamental constants (which, for our purposes here, will include what are better known as initial conditions or boundary conditions) listed in Tegmark, Aguirre, Rees, and Wilczek (2006):

- 2 constants for the Higgs field: the vacuum expectation value (vev) and the Higgs mass,
- 12 fundamental particle masses, relative to the Higgs vev (i.e., the Yukawa couplings): 6 quarks (u, d, s, c, t, b) and 6 leptons (\(e\), \(\mu\), \(\tau\), \(\nu_e\), \(\nu_\mu\), \(\nu_\tau\)),
- 3 force coupling constants for the electromagnetic (\(\alpha\)), weak (\(\alpha_w\)) and strong (\(\alpha_s\)) forces,
- 4 parameters that determine the Cabibbo-Kobayashi-Maskawa matrix, which describes the mixing of quark flavours by the weak force,
- 4 parameters of the Pontecorvo-Maki-Nakagawa-Sakata matrix, which describe neutrino mixing,
- 1 effective cosmological constant (\(\Lambda\)),
- 3 baryon (i.e., ordinary matter) / dark matter / neutrino mass per photon ratios,
- 1 scalar fluctuation amplitude (\(Q\)),
- 1 dimensionless spatial curvature (\(\kappa \lesssim 10^{-60}\)).

This does not include 4 constants that are used to set a system of units of mass, time, distance and temperature: Newton’s gravitational constant (\(G\)), the speed of light \(c\), Planck’s constant \(\hbar\), and Boltzmann’s constant \(k_B\). There are 25 constants from particle physics, and 6 from cosmology.[3]
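The bookkeeping above can be tallied explicitly. The grouping labels in the sketch below are descriptive glosses of my own; the counts are those of the list (following Tegmark et al. 2006).

```python
# Counts of the 31 fundamental constants of the standard models,
# grouped as in the list above (labels are descriptive only).
particle_physics = {
    "Higgs field (vev and mass)": 2,
    "Yukawa couplings (quark and lepton masses)": 12,
    "force coupling constants (EM, weak, strong)": 3,
    "CKM matrix parameters": 4,
    "PMNS matrix parameters": 4,
}
cosmology = {
    "effective cosmological constant": 1,
    "baryon / dark matter / neutrino mass per photon ratios": 3,
    "scalar fluctuation amplitude Q": 1,
    "dimensionless spatial curvature": 1,
}
total = sum(particle_physics.values()) + sum(cosmology.values())  # 31
```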

These parameters appear in the equations of fundamental physics or in the solutions to those equations. While we know their values, often with exquisite accuracy, they cannot be calculated by those equations. Their status was summarised by Richard Feynman, writing about the strength of electromagnetism (\(\alpha\)),

All good theoretical physicists put this number up on their wall and worry about it. Immediately you would like to know where this number for a coupling comes from \(\ldots\) Nobody knows. It’s one of the greatest mysteries of physics: a magic number that comes to us with no understanding.\(\ldots\) We know what kind of a dance to do experimentally to measure this number very accurately, but we don’t know what kind of dance to do on the computer to make this number come out, without putting it in secretly! (Feynman 1985: 129)

A physical quantity’s status as a “fundamental constant” is conferred by the equations in which it appears, and specifically by the fact that those equations constitute the best theory we have of the basic constituents and boundary conditions of the universe.

These parameters allow us to create a subset of possible physical universes, and so to generate a more feasible enquiry.

The Little Question: of all the possible ways that the fundamental constants of the standard models could have been, is our universe what we would expect on naturalism?

The Little Question has much to commend it to our attention.

- Physicists have been exploring the consequences of varying the fundamental constants of nature for decades, but not for the purposes of testing naturalism or promoting theism. Exploring parameter space is required to appraise any physical model, because we want to know which values of the parameters are most likely given our data (posterior parameter distribution), and whether the model explains the data for a wide or narrow range of the parameters. For example, in order to infer the value of the ordinary-to-dark matter ratio of our universe, cosmologists calculate the properties of the cosmic microwave background for a range of possible parameter values. The predictions that match—and, just as importantly, the ones that don’t—tell us which values are most likely. It is as a result of these enquiries that physicists discovered the fine-tuning of the universe for life.
- With some degree of confidence, we can calculate what the universe would be like with different fundamental constants. The job of theoretical physics is to explore the consequences of mathematical models, to connect equations of physics to physical scenarios. For example, we solve Newton’s equations to show that the planets follow (approximately) elliptical orbits. By varying the fundamental constants, we are keeping the same dynamical equations as our universe (Friedmann-Lemaître-Robertson-Walker metric and General Relativity, the Standard Model Lagrangian and quantum field theory). These are the equations that describe our universe, and so these are the equations that have received the most attention from theoretical physicists.
- It is systematic. There is a well-defined set of fundamental constants that we can investigate as a multi-dimensional parameter space. We aren’t just checking every possibility that we can think of, or that mathematicians have thus-far invented/discovered.
- The Little Question reflects the best physics we have, rather than indefinitely (probably infinitely) postponing the Big Question until physics is finished. As above, if the naturalist refuses to generate expectations of physical universes until science is finished and the ultimate laws of our universe are known, then no fact of science or of our experience can be evidence for naturalism. In asking the Little Question, we are staying as close as possible to the best physical theories that we have. Since we don’t know how to systematically consider variations of the forms of the equations themselves, the best available option is to vary the only other degree of freedom that presents itself: the free parameters in the equations, that is, the fundamental constants. Thus, the Little Question is plausibly the best available approximation to the Big Question.

A worry remains: why think that the Little Question reflects the Big Question? Why should we think that the answer to the Little Question is likely to be the same as the answer to the Big Question? We can call this the Lamppost Worry:[4] on an otherwise dark street, if the Big Problem of looking everywhere for my lost keys is intractable, we may be tempted to focus on the more tractable Little Problem of looking for my lost keys under the only lamppost. While this is more reasonable than doing nothing at all, and it may be our best hope, we are not justified in assuming that we will be successful.

More generally, when are we justified in treating a sample of a population as being representative of the whole? Reasoning from samples to populations is commonplace in science, so there should not be an in-principle objection to our kind of argument. The crucial factor is how the sample was chosen. Ideally, we would like our available sample (universes with different constants) to be randomly drawn from the population of interest (all possible physical universes); this is unfortunately not the case. Instead, we have considered possible universes that are closely related to our universe, specifically, they have the same laws. This introduces a bias to our sample. Crucially, this bias works in the naturalist’s favour. Like searching for bears starting at a place where bears were recently sighted, we are looking at other universes starting near our universe. If anything, our sample should contain more life-permitting universes than a random sample.

We can reverse the question: under what circumstances would the answer to the Big Question be different to the Little Question? If Premise [8] is correct, then within our sample/subset, life-permitting universes are a small oasis in a large desert of dead universes. However, the desert itself is a mere subset of some much larger and largely unexplored landscape. For Premise [5] to be false, that is, for our restricted varying-constants subset to be misleading regarding the Big Question, there must be a vast, as yet undiscovered Eden of life that covers a significant part of the remainder of the landscape. It must be the case that beyond the borders of our ignorance lies an expansive collection of possible universes with laws that are robustly and generically life-friendly, that is, they are either life-friendly and parameter free, or life-friendly for the majority of values of any free parameters that they contain.

The reason for our restricted focus in this argument is that assigning probabilities and investigating life-permitting outcomes over the entire landscape is too difficult. However, nothing prevents us in principle from discovering pieces of this greater Eden, that is, specific examples of life-friendly laws. We can just write down some equations and derive the consequences; this is what theoretical physicists do for a living. And yet, we have exactly zero examples of life-friendly laws. Our limited and unsystematic forays into the landscape of other laws of physics have discovered either life-prohibiting or fine-tuned laws. Investigations of the fine-tuning of the universe have considered the consequences of changing the laws of nature; these are summarised in Barnes (2012: §4.1.3) and Lewis and Barnes (2016: ch. 6). For example, if gravity were repulsive, matter wouldn’t clump into complex structures. In a universe of Newtonian gravitating masses (with no other forces), unless the initial conditions are exquisitely fine-tuned, collections of particles either exhibit boring periodic motion or unstable chaotic motion, but not the kind of complexity of arrangement required by life. In a universe that obeyed the classical laws of electromagnetism (no quantum mechanics), stable arrangements of charged particles (such as atoms) are impossible. Even two-dimensional cellular automata have rules that need to be fine-tuned for interesting patterns to result. Nowhere in the corpus of theoretical physics has anyone found a law that would permit life with no fine-tuning of its parameters or mathematical form. And yet, for Premise [5] to be false, these kinds of laws must dominate the landscape of possibilities.[5] In terms of the Lamppost Worry, we are not confined to the glow of a single lamppost. 
We cannot evenly illuminate the whole street, but wherever the light of theoretical physics has shone (by proposing alternative laws of nature), we see a picture that is consistent with fine-tuning.

Ours is a probabilistic argument, not a logical demonstration; we can identify the circumstances under which Premise [5] is false. Thinking of the Lamppost worry, we can imagine the sun rising on our street, and revealing that the area under our lamppost gave a misleading impression of the whole scene. But imagining how a statement could turn out to be false does not make that statement implausible. Given the naturalism-friendly bias of our search, and our complete failure to find any example of the kind of laws that are needed in abundance to make Premise [5] false, it seems unreasonable to suppose that what we don’t know will completely reverse what we do know.

### Premise [6]

Given our restricted focus, naturalism is non-informative with respect to the fundamental constants. Naturalism gives us no reason to expect physical reality to be one way rather than some other way, at the level of its ultimate laws. This is because there are no true principles of reality that are deeper than the ultimate laws. There just isn’t anything that could possibly provide such a reason. The only non-physical constraint is mathematical consistency.

Having narrowed our focus to the Little Question, we treat the fundamental constants as we would treat the ultimate laws of nature if we had them. Because we are practically unable to put a non-informative probability distribution over mathematical equations and structures, we attempt to put a non-informative probability distribution over the fundamental constants of physics, treating the equations in which they appear as fixed background information. This, in the spirit of the Little Question, is doing the best we can with the information we have, and refusing to indefinitely postpone the Big Question until physics is finished.

As noted in Section 2, there is no agreed-upon all-purpose method for generating non-informative distributions, and indeed some scepticism that such a method even exists. Nevertheless, such distributions can be justified in particular situations. Keep in mind that Bayesian probabilities are not stochastic or frequentist; we are not looking for an experiment—even a hypothetical one—whose outcomes display frequencies that obey the distribution. We are instead tasked with finding a mathematical distribution that reasonably and honestly represents a given state of knowledge: the laws of nature (the standard models) contain constants, but neither those laws nor any other physical principle gives us any further information about them. What values might we expect these constants to take?

With this distribution, the relevant likelihood is calculated as follows. Let \(U\) be the statement that a life-permitting universe exists, \(L\) that the laws of nature as we know them (the standard models) apply in the universe, \(N\) naturalism, and \(B\) any other background knowledge, including known mathematical theorems. Let the fundamental constants be represented by \(α_L\), where the subscript reminds us that fundamental constants qua fundamental constants have no theory-independent existence; without the laws \(L\), they are just physical quantities like any other. (This is similar to the notation in Barnes 2017.) The Little Question invites us to calculate the likelihood of our universe on naturalism by taking \(L\) as given and varying over the constants, which we do using the law of total probability:

$$p(U | LNB) = \int p(U | α_L L N B) \, p(α_L | L N B) \, dα_L \tag{3}$$

where the (multi-dimensional) integral is over all values of \(α_L\) for which the non-informative probability distribution for the constants \(p(α_L | L N B)\) is defined.
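As a numerical sanity check, Equation (3) can be evaluated for a toy one-dimensional case. All values here are invented for illustration: a single hypothetical constant with a uniform (non-informative) prior over an allowed range, and a narrow life-permitting window.

```python
import numpy as np

# A minimal numerical sketch of Equation (3) with one hypothetical
# constant alpha. The prior range and the life-permitting window are
# invented for illustration only.
ALPHA_MIN, ALPHA_MAX = 0.0, 1.0
LIFE_LO, LIFE_HI = 0.400, 0.401   # hypothetical life-permitting range

alphas = np.linspace(ALPHA_MIN, ALPHA_MAX, 1_000_001)
d_alpha = alphas[1] - alphas[0]

# p(U | alpha L N B): 1 inside the life-permitting window, 0 outside.
p_U_given_alpha = ((alphas >= LIFE_LO) & (alphas <= LIFE_HI)).astype(float)

# p(alpha | L N B): uniform (non-informative) over the allowed range.
p_alpha = np.full_like(alphas, 1.0 / (ALPHA_MAX - ALPHA_MIN))

# Law of total probability: integrate p(U|alpha) p(alpha) over alpha.
p_U = np.sum(p_U_given_alpha * p_alpha) * d_alpha
print(p_U)  # ≈ 0.001: the width of the window relative to the range
```

With a flat prior, the likelihood reduces to the fraction of the allowed range that is life-permitting, which is the intuition behind the estimates in Premise [8].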

### Premise [7]

Physicists routinely assign non-informative probability distributions to fundamental constants, which we can use to calculate the likelihood that a life-permitting universe exists on naturalism. That is, physics gives us \(p( α_L | L N B)\), as required by Equation (3).

Suppose that you have an experiment that has collected data \(D\) in an attempt to measure the Higgs vev (\(v\)), which is a fundamental constant of the standard model of particle physics. In the Bayesian way, we want to represent our state of knowledge via the posterior: the probability distribution of values of the constant, given the data, the laws, and background information. Using Bayes’ theorem,

$$p(v|DLB) = \frac{p(D | v L B) \, p(v | L B)}{\int p(D | v L B) \, p(v | L B) \, dv}. \tag{4}$$

We require \(p( v | L B)\) for this calculation. More generally, we cannot turn any empirical data \(D\) into the posterior probability of the values of the constants in our universe \(p( α_L | D L B)\) without specifying a prior derived purely from the laws of nature and theoretical background information \(p( α_L | L B)\).
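Equation (4) can be sketched on a grid. The numbers below are invented: a single noisy measurement of a constant \(v\), a flat prior over an assumed theoretical range, and a Gaussian likelihood for the data.

```python
import numpy as np

# A minimal sketch of Equation (4), with invented numbers: one noisy
# measurement D of a constant v, a flat prior p(v|LB) over an assumed
# range [0, V_MAX], and a Gaussian likelihood.
V_MAX = 10.0             # hypothetical upper limit on v from the theory
D_OBS, SIGMA = 4.2, 0.5  # hypothetical measurement and its uncertainty

v = np.linspace(0.0, V_MAX, 10_001)
dv = v[1] - v[0]

prior = np.full_like(v, 1.0 / V_MAX)                    # p(v | LB)
likelihood = np.exp(-0.5 * ((D_OBS - v) / SIGMA) ** 2)  # p(D | v LB), up to a constant

# Bayes' theorem: normalise likelihood x prior by the evidence, the
# integral in the denominator of Equation (4).
posterior = likelihood * prior
posterior /= np.sum(posterior) * dv

print(v[np.argmax(posterior)])  # ≈ 4.2: the posterior peaks at the data
print(np.sum(posterior) * dv)   # ≈ 1.0: the posterior is normalised
```

The point of the sketch is that the prior \(p(v | LB)\) is an unavoidable input: without it, the denominator of Equation (4) cannot be computed at all.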

Similarly, suppose that we want to calculate the posterior probability of a physical theory (\(L\)) using the Bayesian framework and some data \(D\) (or to compare competing theories). We need to calculate the likelihood of the data, but this will depend on the values of the constants. We marginalise over the constants \(α_L\), treating them as nuisance parameters. By the law of total probability,

$$p(D|LB) = \int p(D | α_L L B) \, p(α_L | L B) \, dα_L, \tag{5}$$

where, as previously, the (multi-dimensional) integral is over all values of \(α_L\) for which \(p( α_L | L B)\) is defined. If \(p( α_L | L B)\) were undefined or completely unknown (i.e., not even approximated), then physicists wouldn’t be able to calculate likelihoods for any data \(D\). These theories would be unable to generate expectations, make predictions, or model data, and thus would fail to be testable.
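The marginalisation in Equation (5) can also be made concrete with invented numbers: a theory with one free parameter and a flat prior, where the data sharply prefer a particular value.

```python
import numpy as np

# A minimal sketch of Equation (5), with invented numbers: a theory L
# with one free parameter alpha (flat prior on [0, 1]), where the data
# sharply prefer values near alpha = 0.3.
ALPHA_BEST, WIDTH = 0.3, 0.01

alphas = np.linspace(0.0, 1.0, 100_001)
d_alpha = alphas[1] - alphas[0]

p_D_given_alpha = np.exp(-0.5 * ((alphas - ALPHA_BEST) / WIDTH) ** 2)
p_alpha = np.ones_like(alphas)  # flat prior p(alpha | LB) on [0, 1]

# Marginalise the nuisance parameter:
# p(D|LB) = integral of p(D|alpha L B) p(alpha|LB) d(alpha).
# The best-fit likelihood is 1, but the theory pays an "Occam factor"
# for spreading its prior over the whole range.
p_D = np.sum(p_D_given_alpha * p_alpha) * d_alpha
print(p_D)  # ≈ 0.025: the Gaussian integral, WIDTH * sqrt(2*pi)
```

This is the standard Bayesian machinery for testing theories with free parameters; the likelihood \(p(D|LB)\) simply does not exist unless \(p(α_L | L B)\) is defined.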

In calculating the distribution \(p( α_L | L B)\), physicists do not assume metaphysical positions like naturalism and theism. This is commonly called methodological naturalism, in which physicists attempt to investigate nature without stipulating what reality may or may not lie beyond nature (see, for example, Draper 2005). Regardless of the status of methodological naturalism in science, for our purposes the important point is that what physicists have calculated is identical to \(p( α_L | LNB)\). The reason is that the physicist has not in fact taken any deeper physical or metaphysical principle into account when calculating \(p( α_L | LB)\); nothing changes, then, if we suppose (with the naturalist) that there are no such deeper principles at all. The ordinary practice of physics provides \(p( α_L | L B)\), which is identical to the prior probability distribution that we need for Equation (3).

So, how do physicists actually generate this probability distribution? In particular, what do physicists do when the possible range of a constant is infinite in size? I have written at length on this topic elsewhere (Barnes 2018), and will summarise here. The key to \(p( α_L | L B)\) is that \(L\) appears on the right. As we have noted, fundamental constants qua fundamental constants have no theory-independent existence; they live inside the equations. Thus, while this probability distribution is not informed by naturalism, it is partially informed by the theory itself. Physical theories may introduce free parameters, but they have to be able to control them. In other words, to be testable, the theory must generate probabilities of data (likelihoods), and so from the equations above, must be able to justify prior probability distributions over its free parameters.

In practice, the physical constants fall into two categories. Some are dimensionful, such as the Higgs vev and the cosmological constant (having physical units such as mass), and some are dimensionless pure numbers, such as the force coupling constants. For dimensionful parameters, there is an upper limit on their value within the standard models. Famously, we do not know how to describe gravity within a quantum framework. A back-of-the-envelope calculation shows that if a single particle were to have a mass equal to the so-called Planck mass (\(m_\textrm{Planck}\)), then it would become its own black hole. The point is not that we think that this would actually happen, but rather that a single-particle black hole is a scenario where neither gravity nor quantum theory can be ignored. The Planck mass represents an upper boundary to any single-particle mass scale in our current theories. A lower boundary is provided by zero, since quantum field theory breaks down for negative masses; even if it didn’t, a lower bound would be given by \(-m_\textrm{Planck}\). Thus, the standard models together restrict the value of \(v\) to the range \([0,m_\textrm{Planck})\), and \(\Lambda\) to the range \((-m_\textrm{Planck}^4,m_\textrm{Planck}^4)\) (Dine 2015; Weinberg 1989).

Within these finite ranges, the obvious prior probability distribution is flat between the limits, as other distributions need to introduce additional dimensionful parameters to be normalised.[6] In fact, quantum corrections contribute terms of order \(m_{Planck}^2\) and \(m_{Planck}^4\) to \(v^2\) and \(\rho_\Lambda\) respectively, meaning that the Planck scale is the natural scale for these parameters (Dine 2015). The smallness of these parameters with respect to the Planck scale is known in physics as the hierarchy problem and the cosmological constant problem respectively. This suggests that the prior distribution that best represents our state of knowledge for the Higgs vev is flat in \(v^2\).

For dimensionless numbers, we have a few cases. Some are phase angles, and so a flat prior over \([0,2\pi)\) is reasonable. Some, such as the Yukawa couplings, are connected to masses of particles and thus subject to the Planck scale upper limit. Others vary over an infinite range. But even in the case of a finite range, physicists do not usually postulate a flat prior. Rather, dimensionless parameters are expected a priori to be of order unity. This is the idea behind the definition of naturalness due to 't Hooft:

> a physical parameter or set of physical parameters is allowed to be very small [compared to unity] only if the replacement [setting it to zero] would increase the symmetry of the theory. (1980: 135–136)

A number of heuristic (read: hand-waving) justifications of this expectation are referenced in Barnes (2018). In general, our state of knowledge can be approximated by a distribution that peaks at unity and smoothly decreases away from the peak, assigning a probability of 1/2 each to the regions less than and greater than unity.[7] As we will see below, this is sufficient for the upper-limit estimates we need.
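One concrete distribution with these properties (my own illustrative choice, not a claim about any specific proposal in the literature) is a log-normal with median 1: it decreases smoothly away from unity and assigns probability 1/2 each to the regions below and above 1. Its cumulative distribution can be written with the error function:

```python
import math

# Illustrative sketch only: a log-normal prior with median 1 as one way
# to encode "expected to be of order unity". The log-normal form and
# sigma = 1 are my assumptions, not taken from the text.
def order_unity_cdf(x, sigma=1.0):
    """P(X < x) for a log-normal with median 1 and log-scale sigma."""
    return 0.5 * (1.0 + math.erf(math.log(x) / (sigma * math.sqrt(2.0))))

print(order_unity_cdf(1.0))   # 0.5: half the probability below unity
# A dimensionless coupling as small as 3e-5 is extremely improbable
# under such a prior:
print(order_unity_cdf(3e-5))  # ≈ 0 in double precision (analytically ~1e-25)
```

Any distribution of this general shape assigns very little probability to values many orders of magnitude below unity, which is all the upper-limit estimates below require.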

I stress that these considerations arise within the ordinary practice of physics, as it formulates theories, collects data, and uses that data in the context of probability theory to test our ideas.

### Premise [8]

Using these distributions, the likelihood that a life-permitting universe exists on naturalism is vanishingly small (which establishes Premise [2]). First, we will consider the subset of parameter space that is life permitting, which has been investigated by the physics community over the last few decades. Note that not all of the 31 fundamental constants have interesting life-permitting limits. I list the interesting limits below; where possible, I have used limits that are independent of the other constants—this isolates a conservative, box-shaped region in the multi-dimensional parameter space. Unless otherwise noted, these limits come from Hogan (2000), Tegmark et al. (2006), Barnes (2012), Schellekens (2013), and references therein.

- Cosmological constant, expressed as a density (\(\rho_\Lambda\)): If \(\rho_\Lambda / \rho_{Planck} \lesssim -10^{-90}\), the universe would recollapse after 1 second; if \(\rho_\Lambda / \rho_{Planck} \gtrsim 10^{-90}\), structure formation would cease after 1 second, resulting in a uniform, rapidly diffusing hydrogen and helium soup (Adams, Alexander, Grohs, & Mersini-Houghton 2017).
- Higgs vev (\(v\)): If \(v^2/m_{Planck}^2 \lesssim 6 × 10^{-35}\), then hydrogen is unstable to electron capture; if \(v^2/m_{Planck}^2 \gtrsim 10^{-33}\) then no nuclei are bound and the periodic table is erased.
- Up-quark, down-quark and electron Yukawa couplings (\(y_u,y_d,y_e\)): this three-dimensional subspace and its life-permitting subset are shown in Lewis and Barnes (2016: 256–260), building on Barr and Khan (2007). For stable atoms and stable stars, the region is bounded by: \(0 < y_u < 3 × 10^{-5}\), \(0.7 × 10^{-5} < y_d < 7 × 10^{-5}\), \(3 × 10^{-9} < y_e < 4 × 10^{-5}\).
- Strange quark Yukawa couplings: Jaffe, Jenkins, and Kimchi (2009) have suggested that the strange quark may need to be sufficiently heavier than the light quarks (up and down) to prevent composite particles that include the strange quark—including analogues of the proton and neutron and the hyperon (\(uuddss\))—from participating in and possibly destabilising atomic nuclei. However, a firm limit is not derived, and (as a lower limit only) would probably be relatively weak.
- Neutrino Yukawa couplings: if the sum of the neutrino Yukawa couplings is greater than \(5 × 10^{-12}\) (\(\sum m_\nu > 1\) eV), then galaxy formation is significantly suppressed by free-streaming. Further problems await very heavy neutrinos (\(> 1\) MeV): they overclose the universe, result in no hydrogen left over from the big bang, and affect the ability of supernovae to distribute elements into the wider universe.
- Fine-structure and strong force coupling constants: the size and shape of this two-dimensional subspace that is life permitting is shown in Barnes (2012). The relevant one-dimensional limits are as follows. If \(\alpha_s < 0.108\) (a 9% decrease; see also Pochet, Pearson, Beaudet, & Reeves 1991), then deuterium is unbound and stars fail to ignite (Adams & Grohs 2017b; Barnes & Lewis 2017); if \(\alpha_s > 0.27\), there is little hydrogen left over from the Big Bang for stars and organic molecules. If \(\alpha < 7.3 × 10^{-5}\), stars fail to be stable (though this limit is significantly weakened if the diproton is stable, as shown in Barnes 2015); if \(\alpha > v(y_d - y_u)/200\) MeV \(\approx 0.018\), the proton is heavier than the neutron, and the universe again has no hydrogen.
- Baryon (\(\xi_b\)) and dark matter (\(\xi_c\)) to photon ratios, and the scalar fluctuation amplitude (\(Q\)): Tegmark et al. (2006) summarise eight anthropic constraints on this subset of parameter space. For illustration purposes: if \(Q < 10^{-6}\), matter does not cool sufficiently to collapse into galaxies and stars; if \(Q > 10^{-4}\), galaxies may be too dense to permit long-lived planetary systems. These limits are probably too stringent; see Adams, Coppess, and Bloch (2015).
- Dimensionless spatial curvature (\(\kappa\)): if the early universe had been too positively curved and matter/radiation dominated, it would have recollapsed before any galaxies could form. If it had been too negatively curved and matter/radiation dominated, the expansion would have been too fast for galaxies to form.

Research continues to refine these limits. In particular, the work of Adams (2008), Adams and Grohs (2017b), Adams and Grohs (2017a), Grohs, Howe, and Adams (2018), Barnes (2015), and Barnes and Lewis (2017) has shown that some of the fine-tuning limits regarding stars in the earlier literature needed revision. Stars, it turns out, are remarkably robust with regard to changes in nuclear and atomic physics. Cosmological limits, too, are being investigated using supercomputer simulations of galaxy formation (Barnes, Elahi, Salcido, Bower, Lewis, Theuns, Schaller, Crain, & Schaye 2018). Nevertheless, the fact and degree of the fine-tuning of the universe for life in the current physics literature is largely unchanged since it was first discovered by physicists in the 1970s and 80s (reviewed in Barrow & Tipler 1986).

We now use the distributions discussed under Premise [7] to calculate the likelihood that a life-permitting universe exists on naturalism. Given the context of our enquiry, which treats the fundamental constants as ultimate for the purposes of answering the Little Question, we treat the constants as independent of each other. Having identified box-shaped regions above, we can calculate the relevant individual likelihoods and multiply the results. We will only be interested in orders of magnitude.

- Cosmological constant: Given a uniform distribution over \(\rho_\Lambda\) between the Planck limits \((-\rho_{Planck},\rho_{Planck})\), the likelihood of a life-permitting value of the cosmological constant is at most \(10^{-90}\).
- Higgs vev: Given a uniform distribution over \(v^2\) between zero and the Planck mass \([0,m_{Planck}^2)\), the likelihood of a life-permitting value of the Higgs vev is at most \(10^{-33}\).
- Up-quark, down-quark and electron Yukawa couplings: for simplicity, I will ignore the lower limits, effectively setting them to zero. As discussed above, for dimensionless parameters, we expect a distribution that peaks at unity and smoothly decreases away from the peak. For values less than one, the value assigned by this distribution will be less than that assigned by a uniform distribution[8] on [0, 1]. The likelihood of life-permitting up-quark, down-quark and electron Yukawa couplings is at most \(10^{-13}\).
- Strange quark Yukawa couplings: given the uncertainty regarding this limit, I do not attempt to estimate a likelihood.
- Neutrino Yukawa couplings: using a similar calculation to that above, the likelihood of all three neutrino Yukawa couplings being life permitting is at most \(10^{-33}\), since (obviously) all three values must be smaller than their sum. There is a snag here, however: unlike the down-quark and electron, massless neutrinos are life permitting. Further, massless fundamental particles are physically possible; the photon, for example, is massless. A continuous distribution, however, assigns zero probability to any particular value, including the value zero. We would need to augment our probability distribution by adding, for example, a Dirac delta-function at zero mass. More problematically, the mechanism by which neutrinos acquire mass is still uncertain, but it cannot be exactly the same mechanism as for the quarks and leptons (see the popular-level explanation in Murayama 2002). Given the focus of the Little Question on settled, well-understood physics, I will shelve the neutrino masses.
- Fine-structure and strong force coupling constants: these parameters are roughly of order unity, and so their likelihoods are not particularly small. I will ignore their contribution.
- Baryon (\(\xi_b\)) and dark matter (\(\xi_c\)) to photon ratios: we face a problem with these parameters. The baryon mass per photon is widely believed to be set by the process of baryogenesis, which creates the matter-antimatter asymmetry of our universe: \(\xi_b \sim m_{proton} \eta\), where \(\eta = (n_b - n_{\bar{b}})/n_\gamma\) is the dimensionless baryon asymmetry parameter in the early universe. The standard model does not contain sufficient matter-antimatter asymmetry to create the observed asymmetry of our universe, but there is not as yet a widely-accepted and successful theory of baryogenesis. The situation is even worse for the dark matter mass to photon ratio, since we don’t know the identity of the dark matter particle, its mass, or how it is produced in the early universe. I strongly suspect fine-tuning here, but in the absence of a successful theory, I will shelve these parameters.
- The scalar fluctuation amplitude (\(Q\)): if we treat \(Q\) as being of order unity, as with the other dimensionless parameters, then the likelihood of a life-permitting \(Q\) is not particularly small: \(10^{-4}\) at least, and probably larger. As noted in Footnote 7, assuming \(Q \sim 1\) supposes that the universe is roughly homogeneous, which is to assume that it is in a low-entropy state. I will shelve this parameter too.
- Dimensionless spatial curvature (\(\kappa\)): this is a special case, as there are reasonable theoretical reasons to give significant probabilistic weight to \(\kappa = 0\), which is a life-permitting value (Carroll & Tam 2010; Gibbons & Turok 2008; Hawking & Page 1988). This is independent of cosmic inflation, which aims to provide a dynamical mechanism that sets \(\kappa \approx 0\). Whether inflation itself is fine-tuned is an interesting question (I have argued in Barnes 2012 that it probably is), but regardless, we are not justified in assuming the kind of dimensionless, order-unity distribution discussed in regard to Premise [7]. I do not attempt to estimate a likelihood.

Combining our estimates, the likelihood of a life-permitting universe on naturalism is less than \(10^{-136}\). This, I contend, is vanishingly small.

### Premise [3]

The likelihood that a life-permitting universe exists on theism is not vanishingly small. At this point, it is tempting to jump straight to the analogue of the Big Question for theism: of all the possible ways that a physical universe could have been, is our universe what we would expect on theism?

However, this would be to compare apples with oranges. From Bayes’ theorem, a fair comparison between two theories on the basis of their relative likelihoods must keep the same evidence \(E\) and background information \(B\) for both. Thus, we must instead ask the analogue of the Little Question for theism: of all the possible ways that the fundamental constants of the universe could have been, is our universe what we would expect on theism?

On theism, the properties of the universe—including its fundamental constants—are the result of the actions of an agent, a mind with intentionality. We expect, then, that there is a reason (or reasons) why the universe has its particular set of properties. This reason may involve the consequences of these properties, that is, how fundamental properties actually play out in the universe. This kind of foreknowledge of consequences is a hallmark of intelligence. Our expectations over the set of possibilities—in this case, represented by the set of the fundamental constants—are not non-informative, but can privilege particular subsets of fundamental constants according to the kind of universe that results.

This significantly transforms the predicament of theism relative to naturalism. A likelihood that is non-informative with respect to the fundamental constants is not justified in the case of theism because we expect a reason for the universe. At worst, the likelihood is non-informative with respect to the reason for the universe.

But what reason? It could be that God wants to avoid making physical life. Or prefers black holes. Or has a particular fondness for vast empty spaces. Or just wants to make something complex, whatever the form. Or no complexity at all. Or chooses the constants randomly, according to the same non-informative distribution that characterises naturalism. What effect does this uncertainty have on this premise?

Consider the following simple probabilistic model. Suppose you think of \(n\) possible primary reasons for the universe, counting \(X\) and “not \(X\)” as separate reasons. For completeness, let the \(n\)-th reason be “all the possible primary reasons not on the list, or no reason at all.” Let \(G\) represent the statement “God exists and created a universe”.[9] The \(n\) reasons partition this statement:[10] \(G = \sum_i G_i\). Suppose that \(G_1\) is “God exists and created a universe and his primary reason was to create physical life.”

Then, the likelihood that a life-permitting universe exists on theism, given our restricted focus, is,

$$p(U | GLB) = \sum_i p(U|G_i LB) \, p(G_i | G LB). \tag{6}$$

Given God’s omnipotence, if God intends to create a life-permitting universe, then a life-permitting universe will exist: \(p(U|G_1 LB) = 1\). (\(L\) makes no difference, but is retained for completeness.) To calculate a lower limit, suppose that none of the other reasons will result in a life-permitting universe, even as a side-product, that is, despite not being the primary reason: \(p(U|G_i LB) = 0\), for all \(i \neq 1\). Then,

$$p(U | GLB) = p(G_1 | G LB). \tag{7}$$

Now, what is the probability, given that God exists and created a universe, that God’s primary reason would be to create a life-permitting universe? Positive arguments for a non-negligible value for \(p(G_1 | G LB)\) that appeal to God’s goodness and the moral worth of embodied moral agents can be found in, for example, Swinburne (2004) and Collins (2009). But even if we consider theism to be completely non-informative about God’s possible reasons for creating, we would (in this simple model) not be justified in assigning a probability that is smaller than \(\sim 1/n\). I contend that there are not, in fact, \(\sim 10^{136}\) possible reasons for God to create that have comparable plausibility to that of a life-permitting universe. Unless the naturalist can produce a positive argument (not mere scepticism) to show that \(p(G_1 | G LB)\) is extremely small, zero, or inscrutable, the likelihood that a life-permitting universe exists on theism is not vanishingly small.
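The lower bound \(\sim 1/n\) in this simple model can be made explicit. Here is a minimal sketch with a hypothetical \(n = 1000\) candidate reasons, a non-informative (uniform) distribution over them, and the conservative assumption that only the first reason yields a life-permitting universe:

```python
# A minimal sketch of the n-reasons model; n = 1000 is chosen purely
# for illustration. Reason 1 is "create physical life"; by omnipotence
# it yields a life-permitting universe with certainty, and (as a lower
# limit) no other reason yields one at all.
n = 1000
p_reason = [1.0 / n] * n                    # p(G_i | G L B): non-informative
p_U_given_reason = [1.0] + [0.0] * (n - 1)  # p(U | G_i L B)

# Equation (6): sum over the partition of G.
p_U_given_G = sum(u * p for u, p in zip(p_U_given_reason, p_reason))
print(p_U_given_G)  # 0.001 = 1/n: small, but nowhere near 1e-136
```

However uncertain we are about God’s reasons, the resulting likelihood scales as \(1/n\), and \(n\) would have to be absurdly large to approach the naturalistic likelihood.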

## 4. Answering Objections

Chapter 7 of Lewis and Barnes (2016) replies to fourteen objections to the concept of the fine-tuning of the universe for life, including such classic hits as ‘we’ve only observed one universe’, ‘low probability events happen all the time’, ‘evolution will find a way’, ‘we don’t know the necessary conditions for life’, ‘so much of the universe is inhospitable’, ‘fine-tuners only turn one dial at a time’, ‘why think that life is special’, and ‘there could be other forms of life.’ I will not repeat our answers to these objections here. Note that this chapter was written by an atheist (Lewis) and a theist (present author) in collaboration. Our answers there defend fine-tuning for life in physics, rather than the FTA.

We will complement the excellent recent defence of the FTA set out by Hawthorne and Isaacs (2017; 2018), who comment that the FTA is “as legitimate an argument as one comes across in philosophy.”[11] Here, I will focus on how some objections to the FTA are specifically addressed by the formulation of the argument presented above. In addition, Hawthorne and Isaacs formulate a multi-purpose reply that I will call the Awesome Theistic Argument test (ATA; also deployed by Leslie 1989 and Mawson 2011, amongst others). It seems obvious that “the opening of the Gospel of John written onto the interior of every atom” would be very good evidence for God. Thus, any objection that would “look foolish” in the face of the ATA must also fail against the FTA unless there is some relevant difference between the cases. I will note briefly below where an objection fails the ATA test.

### 4.1. Objection A: Deeper Laws

No physicist believes that the fundamental constants of nature are the last word in physics. The constants and initial conditions just mean we aren’t done yet. We should follow the advice of physicist David Gross: “never, never, never give up!” (quoted in Woit 2007: 10). Science will progress, and the puzzle of the fine-tuning of the universe for life will be solved like so many scientific puzzles before it.

In reply, our FTA does not imply that physicists should stop doing physics. Certainly, I have no intention of doing so. The unexplained values of the fundamental constants, fine-tuned or not, are an opportunity for a deeper theory of physics to step into the limelight. If you have a simple, successful theory that predicts the mass of the electron, publish it! And then book your ticket to Stockholm.

However, this objection ignores the context of the argument: we are doing metaphysics, not science. We are asking whether the totality of what we know about the physical universe is rendered more likely by naturalism or theism. We want to know whether there is anything other than the physical universe, not what the laws and contents of our universe are. I am not claiming that God is a better explanation of the fundamental constants than all future theories of physics. I am claiming that God is a better explanation of the universe than naturalism.

In this context, this objection may amount to the claim that we can only think about naturalism when physics is finished: “Big Question or bust!”. As already noted, this would “defend” naturalism by neutralising any possible reason for or against it. We would be left with no reason to believe that naturalism is true.

The objector may be assuming that the advance of science will necessarily reduce the apparent degree of fine-tuning, eventually overturning Premise [8]. This is false. Science progresses by finding a better match between theory and data; fine-tuning is not that kind of problem. A deeper, more successful theory of fundamental physics and cosmology could very easily make the universe more fine-tuned, not less. For example, the discovery of the non-zero value of the cosmological constant by Riess et al. (1998) and Perlmutter et al. (1999) was undoubtedly progress in more accurately describing the expansion and contents of our universe. It also cemented the best case of fine-tuning we have. I welcome deeper investigation into the fundamental constants; I suspect that more fine-tuning will be found. I am watching the baryogenesis literature with bated breath.

The objector may instead claim that the progress of science is likely (rather than certain) to significantly decrease the degree of fine-tuning, or at least that such a scenario is not very unlikely. But this is precisely what we don’t know, because we don’t know what the deeper laws of physics are. If we knew that, we would have the information we need to formulate a better Little Question. The mere possibility of less finely-tuned deeper laws does nothing to our argument, because it is balanced by the possibility of more finely-tuned deeper laws. This uncertainty cancels out, and we are left with the fact that our best handle on the relevant likelihood is the Little Question.

What, then, is the effect of future science on our argument? Each advance in fundamental physics will provide a new opportunity to formulate a “Little Question 2.0”, and to re-evaluate what our best information tells us about naturalism and theism. But consider the most optimistic scenario, in which we achieve Einstein’s dream of a theory without any free parameters. This would render our approach—of formulating a Little Question by varying the constants of nature—impossible. The question of fine-tuning of the constants would be answered, but the Big Question would remain: is this universe, with its constant-free final theory, what we would expect on naturalism? We still need to ask this question, and to think about how the universe could have been. If we again find ourselves inquiring after possible ultimate laws of nature, we will need to consider alternative equations or symmetries or mathematical structures, rather than simply alternative constants. While we have not worked out how to systematically approach this question, it may turn out that most possible laws and symmetries and structures are life-prohibiting, in which case a slightly-tweaked version of the FTA still succeeds. It is question-begging to assume that the progress of physics will save naturalism.

### 4.2. Objection B: The Multiverse

There exists a vast ensemble of universes, with varying constants and local initial conditions, of which our universe is just one. Then, the explanation of the fine-tuning of the constants of nature is two-fold. Firstly, given a large enough number of other universes with sufficiently variegated properties, the right conditions for life are likely to turn up somewhere. And secondly, of course, intelligent physical life forms could only exist in a universe where just the right conditions prevailed. The fine-tuning of the universe for life is just our local slice of luck.

Needless to say, the multiverse has divided opinion. It has been claimed that inflationary theories of cosmology naturally produce a universe that is divided into subdomains with different physical properties. A history and defence of the inflationary multiverse can be found in Linde (2015). Critique of the multiverse has focussed on a number of issues. Ellis and Silk (2014) have argued that the multiverse is in principle beyond adequate empirical testing, and so can never rise above the level of mathematical speculation. In particular, Carr and Ellis warn that the key universe-generating physics in inflationary models of the multiverse is an “extrapolation [that] is unverified and indeed unverifiable. The physics is hypothetical rather than tested. We are being told that what we have is ‘known physics \(\rightarrow\) multiverse.’ But the real situation is ‘known physics \(\rightarrow\) hypothetical physics \(\rightarrow\) multiverse’” (2008: 34). Opinion about the status of cosmic inflation is also divided; criticism of the theory in Scientific American by Ijjas, Steinhardt, and Loeb (2017) prompted a letter in defence of inflation signed by prominent physicists.

In particular, the multiverse faces the measure problem, about which there is an extensive literature. Many multiverse theories imply or assume that there are an infinite number of other sub-universes. But “in an infinite universe,” says Olum (2012: 6), “everything which can happen will happen an infinite number of times, so what does it mean to say that one thing is more likely than another?”. Olum (2012) argues with considerable force that because it is impossible to assign probabilities to an infinite number of things (regions, observers, etc.) in a way that is unchanged by simply shuffling their arbitrary labels, the measure problem is unsolvable. An infinite multiverse theory cannot justify probabilities and so cannot make predictions. These are open questions; see, among many others, Vilenkin (1995), Garriga, Schwartz-Perlov, Vilenkin, and Winitzki (2006), Aguirre, Gratton, and Johnson (2007), Vilenkin (2007b), Vilenkin (2007a), Gibbons and Turok (2008), Page (2008), Bousso, Freivogel, and Yang (2009), De Simone, Guth, Linde, Noorbala, Salem, and Vilenkin (2010), Freivogel (2011), Bousso and Susskind (2012), Garriga and Vilenkin (2013), Carroll (2017), Page (2017).

I stress these points to combat glib appeals to modern physics and cosmology as having established, or even produced a coherent model of, the multiverse.

For our purposes here, the important question is, can we use the multiverse to attack the Big Question? Can we use physical models of a multiverse to pose a Little Question 2.0, which better approximates the Big Question?

No, for several reasons. There is no standard multiverse model whose parameters we can vary. Cosmologists have not arrived at a model for the multiverse that, like the standard models, is known to account well for the data we have, is widely accepted to be better than its competitors, or has well-constrained fundamental parameters. We instead have a menagerie of bespoke, proof-of-concept, cherry-picked toy models, which add most of the important physics by hand, have almost no record of successful predictions, and were formulated with one eye on the fine-tuning problem. This is in stark contrast to the standard models, which underpin the Little Question.

Furthermore, cosmologists have trouble handling probabilities within a given multiverse model. As noted in Barnes et al. (2018), the measure cannot be a degree of freedom of a multiverse model. A specific multiverse model must justify its measure on its own terms, since the freedom to choose a measure is simply the freedom to choose predictions ad hoc. The measure problem is symptomatic of the fact that most multiverse models are toy models: a measure must be parachuted in, as the models lack the intestinal fortitude to generate predictions on their own terms. In light of this, there are dim prospects for anyone who wants to assign probabilities across broad classes of multiverse models.

The current set of multiverse models are worthy of investigation; my collaborators and I have invested hundreds of thousands of supercomputer hours into testing multiverse predictions of the cosmological constant (Barnes et al. 2018). But even if one strongly suspected that a multiverse exists, these models simply cannot tell us what we would expect of a typical multiverse. They can’t offer a well-posed Little Question 2.0. It might turn out that, in a representative set of possible multiverses, our universe is still not what the naturalist would expect. Maybe a life-permitting multiverse requires fine-tuning of its parameters. We don’t know. No one knows the identity of the field that causes inflation, its properties, the possible range of those properties, the likely initial state of the field, the mechanism that varies the other constants (such as the mass of the electron and the strength of electromagnetism) across the multiverse, or the distribution of that variation—and that’s just for models of the multiverse that depend on inflation. Those are supposed to be the best multiverse models.

Those who disagree with this diagnosis are asked to choose a few cosmologists who advocate the multiverse, and ask them to write down a list of the fundamental equations of the multiverse, identify the fundamental constants in those equations, summarise our best observational constraints on those parameters, and tell us the model’s predictions for the 31 constants of the standard models. There will be little agreement on the new fundamental equations and constants, most parameters on any given list will be largely if not completely unconstrained by data, and predictions will be, at best, weak and highly model-dependent.

In the context of our argument, the multiverse is not directly in competition with theism. A convincing, natural, elegant multiverse model will not replace God. It will allow the naturalist to pose a Little Question 2.0, in the hope that naturalism will win the rematch. We’ll calculate the relevant distributions, run some more supercomputer simulations, and fire up the probability calculus—it’ll be great fun. Until then, the multiverse reply to the FTA is appealing to assumptions about a physical theory that we do not have.

### 4.3. Objection C: Normalisability

We cannot conclude that the likelihood of a life-permitting universe on naturalism is small, because the relevant probability measure must attempt to spread itself evenly over an infinitely large range. This is impossible: such a distribution cannot be normalised, and so cannot be a probability distribution.

I have written at length elsewhere about the Normalisability objection in the context of the fine-tuning of the universe for life (Barnes 2018). I will apply this response specifically to the FTA.

Firstly, if one agrees with the usefulness of the Little Question—that we should try to address the Big Question, and that varying the constants is a promising approach—then this is a mere technical setback. If, in our ignorance, we have failed to correctly set up the problem to be solved and do not have appropriate probability distributions, then we should simply try again. There is no in-principle objection that would lead us to abandon the entire project.

As argued in Barnes (2018) and outlined above, the two conditions for the normalisability problem—infinite range, flat distribution—are not forced on us for the fundamental constants of nature. Dimensional parameters are bounded by the Planck scale; they cannot vary over an infinite range. This restriction comes from the standard models themselves (\(L\)), which are taken as given when we calculate the likelihood in the context of the Little Question. This restriction does not postulate that some principle of logical or metaphysical possibility applies to these quantities, but is rather a consequence of the fact that their status as fundamental constants is bestowed by the standard models, and those models are only mathematically well-defined within the Planck limits.

Similarly, we are not forced by any principle to attempt to place a flat distribution from \([0,\infty)\) on dimensionless parameters. It is not unreasonable to place a higher expectation on order unity values, which allows us to choose from a range of normalisable distributions. As a worst-case scenario, we could simply abandon cases of fine-tuning based on dimensionless parameters: our estimate of \(p(U | LNB)\) in Premise [2] is still \(10^{-123}\).
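
The worst case need not arise, though: a distribution that favours order-unity values can be perfectly normalisable on \([0,\infty)\). As a minimal numerical sketch (my own illustration; the half-Cauchy is just one convenient example of such a distribution, not a choice made in the paper):

```python
import math

# Half-Cauchy density on [0, inf): peaked at order-unity values,
# heavy-tailed, and yet properly normalisable.
def half_cauchy(x):
    return 2.0 / (math.pi * (1.0 + x * x))

# Its cumulative mass on [0, B] has the closed form (2/pi) * atan(B),
# which approaches 1 as B -> infinity: a genuine probability distribution.
mass_at_1e9 = (2.0 / math.pi) * math.atan(1e9)
print(mass_at_1e9)  # ~1.0

# By contrast, a "flat" density c > 0 on [0, inf) accumulates mass c * B
# on [0, B], which grows without bound; no choice of c makes the total
# equal 1, so no flat probability distribution on [0, inf) exists.
```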

Furthermore, as noted, these distributions are very closely related to those that physicists use to infer the values of the fundamental constants from data and to test physical theories by calculating their likelihoods. The Normalisability Objection, if successful, would apply equally to the calculation of \(p(D | LB)\) for any data, not just the fact that our universe is life permitting on naturalism. Manoeuvres that physicists use to avoid normalisability problems can be used by the defender of the FTA.

### 4.4. Objection D: God Wouldn’t Fine-Tune

This objection comes in a few flavours, but essentially argues that the picture of God fine-tuning the universe is inconsistent with the idea of God as an all-powerful creator. Why would God make a universe that requires careful adjustments of the Higgs vev dial, when God could create a universe that didn’t need such fine adjustments, or any such adjustments, or in which life existed in spite of the lack of such adjustments?

Firstly, as noted in our discussion of Premise [3], the restriction of our attention to the fundamental constants of the standard models is purely methodological. We must hold the evidence E to be the same for the likelihoods on naturalism and theism, and so we consider God’s action in the limited context of changes to the constants of nature. No limit on the true power of God is assumed or implied.

I will consider the versions of this objection offered by different authors. Halvorson has argued that “since God can choose the laws of nature, God can set the chances that a universe like ours would come into existence. ... if God could be expected to create a nice universe, then God could also be expected to set favourable chances for a nice universe. Therefore, the fine-tuning argument defeats itself” (2018: 122). I have responded to the argument of Halvorson in Barnes (2018). To summarise, chances play no role in our argument whatsoever.

And even if they did, Halvorson’s claim that the chances of initial conditions are set by the theory is simply mistaken. Even if the theory puts a measure over those conditions, this does not imply the assignment of chances to those initial conditions. I can falsify Halvorson’s claim directly: I run computer simulations of universes, and I don’t choose the initial conditions via a chancy process that respects a measure derived from the theory. I pick whatever initial conditions I like. God could do the same.

Weisberg considers two options open to a designer of the traditional sort: a) create a universe whose laws need fine-tuning in order to be life permitting, or b) create a universe with laws whose conditions and parameters need no tuning, that “would generate intelligence no matter how the parameters and conditions were set, or laws that would generate intelligent life on most settings” (2010: 433). Because there is no reason to expect the designer to choose a fine-tuned cosmology, “a fine-tuned cosmology seems no more likely given a designer than given not.”

This objection asks us to deny Premise [5], forsaking the safe haven of the standard models and venturing into the wild unknown of other possible laws of nature. Suppose we can identify some set \(M\) of potential ultimate mathematical laws of nature with which to pose what we’ll call the Medium Question: of all the possible universes represented by the set of laws \(M\), is our universe what we would expect on naturalism?

We have two pieces of evidence to calculate the likelihood of: a life-permitting universe exists (\(U\)), and the laws of that life-permitting universe are the standard models (\(L\)). Then, (suppressing \(B\) in our notation),

$$\frac{p(UL|MN)}{p(UL|MG)} = \frac{p(U|LMN)}{p(U|LMG)} \cdot \frac{p(L|MN)}{p(L|MG)} \, . \qquad (8)$$

In short, we ask for the likelihood of our universe given our laws of nature, and then the likelihood of our (fine-tuned) laws of nature.

So far, we have argued that the first ratio on the right side of Equation (8) is very small (\(10^{-136}\), give or take; \(M\) makes no difference). Then, if we follow the advice of Weisberg that “a fine-tuned cosmology seems no more likely given a designer than given not” (2010: 433), we will set \(p(L|MN) = p(L|MG)\), so that the second ratio is equal to one. The likelihood is unchanged, as is the FTA. To make this objection work, the objector would have to argue that the likelihood of our fine-tuned laws on theism is \(\sim 10^{136}\) times smaller than the likelihood of our fine-tuned laws on naturalism.

The formal argument of Weisberg (2010) presents the FTA as depending on the premise (in our notation) \(p(L|UG) > p(L|UN)\). We must take \(U\) as given because it is old data: we’ve always known that our universe is life permitting. This comes from applying the chronological mandate (Section 2), which as we have seen does not follow from any principle of Bayesian probability theory. Every likelihood calculates the probability of some fact that we already know. But even applying the mandate, in this expanded context, the FTA should calculate the likelihood of all the evidence (\(LU\)), not just the latest piece. In other words, we calculate \(p(LU|G) = p(L|UG) p(U|G)\) and \(p(LU|N) = p(L|UN) p(U|N)\). And behold, the small likelihood of a life-permitting universe on naturalism returns (\(p(U|N)\)).
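
The arithmetic can be made concrete with toy numbers (my own purely illustrative values, not the paper’s estimates; only the structure of the chain rule matters):

```python
# Toy likelihoods (illustrative values, not the paper's estimates):
p_U_given_N = 1e-136   # life-permitting universe, given naturalism
p_U_given_G = 0.1      # life-permitting universe, given theism
p_L_given_UN = 0.5     # our particular laws, given U and naturalism
p_L_given_UG = 0.5     # our particular laws, given U and theism

# Conditioning only on the "latest" evidence L, with U taken as given:
latest_only = p_L_given_UG / p_L_given_UN
print(latest_only)  # 1.0 -- the argument seems to vanish

# Chain rule: p(LU|.) = p(L|U.) * p(U|.). The full-evidence ratio is
full = (p_L_given_UG * p_U_given_G) / (p_L_given_UN * p_U_given_N)
print(full)  # ~1e135 -- the small likelihood p(U|N) returns
```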

It is worth stressing that when Weisberg (2010: 433) appeals to the possibility of “laws that would generate intelligence no matter how the parameters and conditions were set, or laws that would generate intelligent life on most settings”, we know of exactly zero such laws in theoretical physics. As we pointed out above under Premise [5], physicists are perfectly capable of discovering such laws—we need only paper and pencil. And yet, none are known. Of course, God would know about non-fine-tuned-life-permitting laws, if they exist at all. But there is no reason for God to prefer them. Indeed, given the plausibly limited palette of such universes, they may have other limitations that make a fine-tuned universe preferable.

Taking this kind of objection one step further, why would God need laws at all? Carroll argues that “the physical world could behave in any way it pleases; God could still create ‘life,’ and associate it with different collections of matter in whatever way he may choose” (2016: 310).

As noted in the defence of Premise [3], the FTA does not claim that our universe is likely on theism, or that we can divine God’s intentions with any great certainty. Doubts about how God would be expected to act are entirely consistent with the claim that the likelihood that a life-permitting universe exists on theism is not vanishingly small. A positive case for natural laws is presented by Swinburne (2004): what we call the physical universe is the public space by which moral agents interact. It obeys predictable laws so that actions have somewhat predictable consequences, which is a prerequisite for them to be morally meaningful actions. By contrast, Carroll’s imagined universe is bizarre—matter would behave in a predictable but life-prohibiting way, right up until it dared to pass too close to a life-form, at which point X-rays would swerve, nuclei would miraculously stick together, and one lucky patch of gas might spontaneously ignite into a star, for reasons completely opaque to scientific enquiry. Why would God make that universe, rather than one with consistent, discoverable laws? Such considerations, even if taken seriously, have a negligible effect on Premise [3].

### 4.5. Objection E: Too Fine-Tuned for God

Carroll presents this objection as follows:

If the reason why certain characteristics of the universe seem fine-tuned is because life needs to exist, we would expect them to be sufficiently tuned to allow for life, but there’s no reason for them to be much more tuned than that. The entropy of the universe, for example [seems] much more tuned than is necessary for life to exist. ... [F]rom purely anthropic considerations, there is no reason at all for God to have made it that small. We therefore think there is some dynamic, physics-based reason why the entropy started off with the fine-tuned value it did. (2016: 311)

This objection, in a similar way to Objection A, misrepresents theism as a would-be scientific theory. If there is a physics-based reason for the low entropy of our universe, then we’ll just start again with a Little Question 2.0 and see what happens.

If naturalism is true, then there is some set of ultimate, unexplainable properties of physical reality. Would naturalism appear plausible if there were no deeper theories to appeal to, if the Big Question could no longer be postponed? To try to answer this, we posed the Little Question. When we take the standard models as given, if there is no reason for additional fine-tuning on theism, then there is no reason for it on naturalism either. This is because there is no reason for any ultimate fact about our universe on naturalism. In the Bayesian framework, this extra fine-tuning cancels out, making no difference to the likelihood ratios. A theory cannot taunt another theory with a fact that both explain equally well (or, in this case, equally poorly).[12]
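
The cancellation is elementary: a factor common to both likelihoods drops out of the ratio. A toy check (my own illustrative numbers, chosen only to exhibit the structure):

```python
# Suppose some extra fact F (e.g. extra-low entropy) is equally
# (im)probable on both hypotheses; its likelihood eps is common to both.
eps = 1e-50            # hypothetical common likelihood of F
p_U_given_N = 1e-136   # illustrative value
p_U_given_G = 0.1      # illustrative value

without_F = p_U_given_N / p_U_given_G
with_F = (eps * p_U_given_N) / (eps * p_U_given_G)
print(without_F, with_F)  # identical ratios: eps cancels out,
                          # so the extra fine-tuning makes no difference
```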

Could there be a theistic reason for additional fine-tuning? Carroll (2016) says that there could be, but all such reasons will be ad hoc, and reflect only the fact that theism is poorly defined and thus infinitely flexible. It is worth remembering that the naturalist cannot make this argument and complain that God hasn’t left enough evidence of God’s existence; such a complaint is precisely a claim about God’s probable intentions and actions.[13] The reader will have to decide. For example, low entropy initial conditions over the observable universe (as opposed to merely in our Solar System, for example) are necessary for our beautiful night sky, from what we see with our naked eye to our biggest telescopes. On a clear night, far away from city lights, try staring deeply into the Milky Way for a while and see if you’re compelled to shout, “not worth it!”

Finally, this objection fails the ATA test: “the first 13 verses of the Gospel of John written on every atom would have been enough to convince us; there is no reason for God to write the 14th verse. But we observe 14 verses. So we should look for a physics-based explanation.”

### 4.6. Objection F: Inscrutable God

Manson (2018) defends the contention that, for certain people (‘fine-tuning sceptics’), the probability that there is a life-permitting universe if God exists is inscrutable. Thus, Premise [3] is false, and the argument fails.

Under what conditions are we permitted to declare that the probability of a proposition is inscrutable? This must be more than your average scepticism. After all, the whole point of probability is to deal with uncertainty, so merely declaring “I have no idea what God would do” is not enough to establish inscrutability. The fine-tuning sceptic is not claiming that the probability is zero, so we are presumably not looking for some logical or metaphysical necessity that would prevent God, if God exists, from creating a life-permitting universe.

Well then, what? What is this principle of inscrutability, stronger than scepticism but weaker than logical or metaphysical necessity?

This is important, because this principle would need to be incorporated into all Bayesian reasoning. Recall that all theory testing is theory comparison, so we need to understand the space of theories that are alternatives to some given \(T\) in Equation (2). If one of the alternative theories that goes into the \(\bar{T}\) terms has an inscrutable prior or generates an inscrutable likelihood, then \(p(T|EB)\) is inscrutable for any theory and for any evidence. In practice, most scientists would simply set the prior probability or likelihood of an inscrutable theory to zero, and thereby ignore it. But applying this to the FTA amounts to the declaration that we should reason as if the probability of God creating a life-permitting universe is zero, which seems far too strong to be justified by mere scepticism. If the Bayesian can’t set inscrutable probabilities to zero, we need a principled way to exclude inscrutable theories before we start. Probability textbooks are all missing a chapter near their beginning. What are these principles, how do they relate to the basics of the Bayesian approach, and how has probability theory managed for so long without them?
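
To see why an inscrutable ingredient infects everything, consider a toy version of Bayesian theory comparison (the function below is my own minimal sketch of the kind of calculation Equation (2) involves, not a quotation of it):

```python
import math

def posterior(likelihood_T, prior_T, rivals):
    """Posterior for theory T, given the likelihoods and priors of its rivals.

    rivals: list of (likelihood, prior) pairs for the alternative theories.
    """
    numerator = likelihood_T * prior_T
    total = numerator + sum(l * p for l, p in rivals)
    return numerator / total

# With scrutable rivals, the posterior is well defined:
ok = posterior(0.8, 0.5, [(0.1, 0.3), (0.2, 0.2)])
print(ok)

# Declare one rival's likelihood "inscrutable" (NaN). The NaN propagates
# through the sum in the denominator: the posterior of every theory, on
# any evidence, becomes inscrutable too.
bad = posterior(0.8, 0.5, [(float("nan"), 0.3), (0.2, 0.2)])
print(math.isnan(bad))  # True
```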

Manson (2018: 3–4) provides four quotes from sceptics, but close inspection finds that they all fall short of claiming inscrutability. Oppy says “it is not clear to me that there is very much that one can conclude about the kind of universe that the designer is likely to produce.” Narveson says, “there is no credible reason why He would have done it one way, or another, or for that matter – worse yet – at all.” Sober says, “if this designer is so different, why are we so sure that he would build the vertebrate eye in the form in which we find it?” Gould says, “If disembodied mind does exist ... , must it prefer a universe that will generate our earth’s style of life?”

None of these claims amount to denying Premise [3]. One does not need to conclude much about God, or have a credible reason for God’s action, or be “so sure” about God’s motives, or know what God must prefer, in order to affirm Premise [3].

Finally, this objection fails the ATA test. Manson contends that the fine-tuning sceptic can limit the extent of their judgement of inscrutability, so that while being unconvinced by the FTA, they could agree that “there would be evidence of God’s existence if, for example, the stars miraculously rearranged themselves to spell out the Nicene Creed” (2018: 5). And yet a starry Nicene sceptic could block this argument by claiming that the probability of “We believe in one God, the Father Almighty, Maker of all things visible and invisible . . . ” appearing in the night sky if God exists is inscrutable. This, if anything, is more plausible than declaring that the probability of a life-permitting universe on theism is inscrutable, and yet the conclusion is absurd. If the starry Nicene sceptic would be irrational to block that argument by appealing to inscrutability, then the fine-tuning sceptic must also be irrational.

## 5. Conclusion

What physical universe would we expect to exist, if naturalism were true? To systematically and tractably explore other ways that the universe could have been, we vary the free parameters of the standard models of particle physics and cosmology. This exercise could have discovered that our universe is typical and unexceptional. It did not. This search for other ways that the universe could have been has overwhelmingly found lifelessness.

In short, the answer to the Little Question is no. And so, plausibly and as best we can tell, the answer to the Big Question is no. The fine-tuning of the universe for life shows that, according to the best physical theories we have, naturalism overwhelmingly expects a dead universe.

## Acknowledgments

Thanks to Nevin Climenhaga, Allen Hainline, Ryan Sanden and two anonymous referees for useful comments. LAB is supported by a grant from the John Templeton Foundation. This publication was made possible through the support of a grant from the John Templeton Foundation. The opinions expressed in this publication are those of the author and do not necessarily reflect the views of the John Templeton Foundation.

## References

- Adams, Fred C. (2008). Stars in Other Universes: Stellar Structure with Different Fundamental Constants. Journal of Cosmology and Astroparticle Physics, 2008(08), 010. https://doi.org/10.1088/1475-7516/2008/08/010
- Adams, Fred C. and Evan Grohs (2017a). On the Habitability of Universes without Stable Deuterium. Astroparticle Physics, 91, 90–104. https://doi.org/10.1016/j.astropartphys.2017.03.009.
- Adams, Fred C. and Evan Grohs (2017b). Stellar Helium Burning in Other Universes: A Solution to the Triple Alpha Fine-Tuning Problem. Astroparticle Physics, 87, 40–54. https://doi.org/10.1016/j.astropartphys.2016.12.002
- Adams, Fred C., Katherine R. Coppess, and Anthony M. Bloch (2015). Planets in Other Universes: Habitability Constraints on Density Fluctuations and Galactic Structure. Journal of Cosmology and Astroparticle Physics, 2015(09), 030. https://doi.org/10.1088/1475-7516/2015/9/030
- Adams, Fred C., Stephon Alexander, Evan Grohs, and Laura Mersini-Houghton (2017). Constraints on Vacuum Energy from Structure Formation and Nucleosynthesis. Journal of Cosmology and Astroparticle Physics, 2017(03), 021. https://doi.org/10.1088/1475-7516/2017/03/021
- Aguirre, Anthony, Steven Gratton, and Matthew C. Johnson (2007). Hurdles for Recent Measures in Eternal Inflation. Physical Review D, 75(12), 123501. https://doi.org/10.1103/PhysRevD.75.123501
- Albert, David Z. (2015). After Physics. Harvard University Press.
- Barnes, Luke A. (2012). The Fine-Tuning of the Universe for Intelligent Life. Publications of the Astronomical Society of Australia, 29(4), 529–564. https://doi.org/10.1071/AS12015.
- Barnes, Luke A. (2015). Binding the Diproton in Stars: Anthropic Limits on the Strength of Gravity. Journal of Cosmology and Astroparticle Physics, 2015(12), 050. https://doi.org/10.1088/1475-7516/2015/12/050
- Barnes, Luke A. (2017). Testing the Multiverse: Bayes, Fine-Tuning and Typicality. In Khalil Chamcham, Joseph Silk, John D. Barrow, and Simon Saunders (Eds.), The Philosophy of Cosmology (447–466). Cambridge University Press.
- Barnes, Luke A. (2018). Fine-Tuning in the Context of Bayesian Theory Testing. European Journal for Philosophy of Science, 8(2), 253–269. https://doi.org/10.1007/s13194-017-0184-2.
- Barnes, Luke A. and Geraint F. Lewis (2017). Producing the Deuteron in Stars: Anthropic Limits on Fundamental Constants. Journal of Cosmology and Astroparticle Physics, 2017(7), 036. https://doi.org/10.1088/1475-7516/2017/07/036
- Barnes, Luke A., Pascal J. Elahi, Jaime Salcido, Richard G. Bower, Geraint F. Lewis, Tom Theuns, . . . , Joop Schaye (2018). Galaxy Formation Efficiency and the Multiverse Explanation of the Cosmological Constant with EAGLE Simulations. Monthly Notices of the Royal Astronomical Society, 477(3), 3727–3743. https://doi.org/10.1093/mnras/sty846
- Barr, S. M. and Almas Khan (2007). Anthropic Tuning of the Weak Scale and of mu/md in Two-Higgs-Doublet models. Physical Review D, 76(4), 045002. https://doi.org/10.1103/PhysRevD.76.045002
- Barrow, John D. and Frank J. Tipler (1986). The Anthropic Cosmological Principle. Clarendon.
- Bousso, Raphael and Leonard Susskind (2012). Multiverse Interpretation of Quantum Mechanics. Physical Review D, 85(4), 045007. https://doi.org/10.1103/PhysRevD.85.045007.
- Bousso, Raphael, Ben Freivogel, and I-Sheng Yang (2009). Properties of the Scale Factor Measure. Physical Review D, 79(6), 063513. https://doi.org/10.1103/PhysRevD.79.063513
- Carr, Bernard J. and George Francis Rayner Ellis (2008). Universe or Multiverse? Astronomy & Geophysics, 49(2), 2.29–2.33. https://doi.org/10.1111/j.1468-4004.2008.49229.x
- Carroll, Sean M. (2007). Turtles Much of the Way Down. Retrieved from http://www.preposterousuniverse.com/blog/2007/11/25/turtles-much-of-the-way-down
- Carroll, Sean M. (2016). The Big Picture. Dutton.
- Carroll, Sean M. (2017). Why Boltzmann Brains Are Bad. ArXiv e-prints. Retrieved from http://arxiv.org/abs/1702.00850
- Carroll, Sean M. and Heywood Tam (2010). Unitary Evolution and Cosmological Fine-Tuning. ArXiv e-prints. Retrieved from http://arxiv.org/abs/1007.1417
- Caticha, Ariel (2009). Quantifying Rational Belief. AIP Conference Proceedings, 1193, 60–68. https://doi.org/10.1063/1.3275647
- Climenhaga, Nevin (2019). (Epistemic) Probabilities Are Degrees of Support, Not Degrees of (Rational) Belief. Manuscript in preparation.
- Collins, Robin (2009). The Teleological Argument: An Exploration of the Fine-Tuning of the Universe. In William Lane Craig and J. P. Moreland (Eds.), The Blackwell Companion to Natural Theology (202–281). Blackwell.
- Colyvan, Mark, Jay L. Garfield, and Graham Priest (2005). Problems with the Argument From Fine Tuning. Synthese, 145(3), 325–338. https://doi.org/10.1007/s11229-005-6195-0
- Cox, R. T. (1946). Probability, Frequency and Reasonable Expectation. American Journal of Physics, 14(1), 1–13. https://doi.org/10.1119/1.1990764
- Craig, William Lane (2003). Design and the Anthropic Fine-Tuning of the Universe. In Neil A. Manson (Ed.), God and Design: The Teleological Argument and Modern Science (155–177). Routledge.
- De Simone, Andrea, Alan H. Guth, Andrei Linde, Mahdiyar Noorbala, Michael P. Salem, and Alexander Vilenkin (2010). Boltzmann Brains and the Scale-Factor Cutoff Measure of the Multiverse. Physical Review D, 82(6), 063520. https://doi.org/10.1103/PhysRevD.82.063520
- Dine, Michael (2015). Naturalness under Stress. Annual Review of Nuclear and Particle Science, 65(1), 43–62. https://doi.org/10.1146/annurev-nucl-102014-022053
- Draper, Paul (2005). God, Science and Naturalism. In W. J. Wainwright (Ed.), The Oxford Handbook of Philosophy of Religion (272–303). Oxford University Press.
- Ellis, George Francis Rayner and Joseph Silk (2014). Scientific Method: Defend the Integrity of Physics. Nature, 516(7531), 321–323. https://doi.org/10.1038/516321a
- Feynman, Richard P. (1985). QED: The Strange Theory of Light and Matter. Princeton University Press.
- Freivogel, Ben (2011). Making Predictions in the Multiverse. Classical and Quantum Gravity, 28(20), 204007. https://doi.org/10.1088/0264-9381/28/20/204007
- Garriga, Jaume and Alexander Vilenkin (2013). Watchers of the Multiverse. Journal of Cosmology and Astroparticle Physics, 2013(05), 037. https://doi.org/10.1088/1475-7516/2013/05/037
- Garriga, Jaume, Delia Schwartz-Perlov, Alexander Vilenkin, and Sergei Winitzki (2006). Probabilities in the Inflationary Multiverse. Journal of Cosmology and Astroparticle Physics, 2006(01), 017. https://doi.org/10.1088/1475-7516/2006/01/017
- Gibbons, Gary W. and Neil Turok (2008). Measure Problem in Cosmology. Physical Review D, 77(6), 063516. https://doi.org/10.1103/PhysRevD.77.063516
- Glymour, C. (1980). Theory and Evidence. Princeton University Press.
- Grohs, Evan, Alex R. Howe, and Fred C. Adams (2018). Universes without the Weak Force: Astrophysical Processes with Stable Neutrons. Physical Review D, 97(4), 043003. https://doi.org/10.1103/PhysRevD.97.043003
- Halvorson, Hans (2018). A Theological Critique of the Fine-Tuning Argument. In Matthew A. Benton, John Hawthorne, and Dani Rabinowitz (Eds.), Knowledge, Belief, and God (122–135). Oxford University Press.
- Hawking, S. W. and Don N. Page (1988). How Probable is Inflation? Nuclear Physics B, 298(4), 789–809. https://doi.org/10.1016/0550-3213(88)90008-9
- Hawthorne, John and Yoaav Isaacs (2017). Misapprehensions about the Fine-Tuning Argument. Royal Institute of Philosophy Supplement, 81, 133–155. https://doi.org/10.1017/S1358246117000297
- Hawthorne, John and Yoaav Isaacs (2018). Fine-Tuning Fine-Tuning. In Matthew A. Benton, John Hawthorne, and Dani Rabinowitz (Eds.), Knowledge, Belief, and God (1–54). Oxford University Press.
- Hogan, Craig J. (2000). Why the Universe Is Just So. Reviews of Modern Physics, 72(4), 1149–1161. https://doi.org/10.1103/RevModPhys.72.1149
- Hoyle, Fred (1982). The Universe: Past and Present Reflections. Annual Review of Astronomy and Astrophysics, 20(1), 1–36. https://doi.org/10.1146/annurev.aa.20.090182.000245
- Ijjas, Anna, Paul J. Steinhardt, and Abraham Loeb (2017, February). Cosmic Inflation Theory Faces Challenges. Scientific American.
- Jaffe, Robert, Alejandro Jenkins, and Itamar Kimchi (2009). Quark Masses: An Environmental Impact Statement. Physical Review D, 79(6), 065014. https://doi.org/10.1103/PhysRevD.79.065014
- Jaynes, E. T. (2003). Probability Theory: The Logic of Science. Cambridge University Press.
- Kass, Robert E. and Larry Wasserman (1996). The Selection of Prior Distributions by Formal Rules. Journal of the American Statistical Association, 91(435), 1343. https://doi.org/10.2307/2291752
- Knuth, Kevin H. and John Skilling (2012). Foundations of Inference. Axioms, 1(3), 38–73. https://doi.org/10.3390/axioms1010038
- Kolmogorov, A. (1933). Foundations of the Theory of Probability. Julius Springer.
- Leslie, John (1989). Universes. Routledge.
- Lewis, G. F. and L. A. Barnes (2016). A Fortunate Universe: Life in a Finely Tuned Cosmos. Cambridge University Press.
- Linde, Andrei (2015). A Brief History of the Multiverse. ArXiv e-prints. Retrieved from http://arxiv.org/abs/1512.01203
- Manson, Neil A. (2018). How Not to Be Generous to Fine-Tuning Sceptics. Religious Studies. Advance online publication. https://doi.org/10.1017/S0034412518000586
- Mawson, T. J. (2011). Explaining the Fine Tuning of the Universe to Us and the Fine Tuning of Us to the Universe. Royal Institute of Philosophy Supplement, 68, 25–50. https://doi.org/10.1017/S1358246111000075
- McGrew, Timothy J., Lydia McGrew, and Eric Vestrup (2001). Probabilities and the Fine-Tuning Argument: A Sceptical View. Mind, 110(440), 1027–1038. https://doi.org/10.1093/mind/110.440.1027
- Monton, Bradley (2006). God, Fine-Tuning, and the Problem of Old Evidence. The British Journal for the Philosophy of Science, 57(2), 405–424. https://doi.org/10.1093/bjps/axl008
- Murayama, Hitoshi (2002). The Origin of Neutrino Mass. Physics World, 15(5), 35–39. https://doi.org/10.1088/2058-7058/15/5/36
- Olum, Ken D. (2012). Is There Any Coherent Measure for Eternal Inflation? Physical Review D, 86(6), 1–6. https://doi.org/10.1103/PhysRevD.86.063509
- Page, Don N. (2008). Cosmological Measures without Volume Weighting. Journal of Cosmology and Astroparticle Physics, 2008(10), 025. https://doi.org/10.1088/1475-7516/2008/10/025
- Page, Don N. (2017). Bayes Keeps Boltzmann Brains at Bay. ArXiv e-prints. Retrieved from http://arxiv.org/abs/1708.00449
- Penrose, Roger (1979). Singularities and Time-Asymmetry. In S. W. Hawking and W. Israel (Eds.), General Relativity: An Einstein Centenary Survey (581–638). Cambridge University Press.
- Penrose, Roger (1989). The Emperor’s New Mind. Vintage.
- Penrose, Roger (2004). The Road to Reality: A Complete Guide to the Laws of the Universe. Vintage.
- Perlmutter, S., G. Aldering, G. Goldhaber, R. A. Knop, P. Nugent, P. G. Castro, . . . , The Supernova Cosmology Project (1999). Measurements of Ω and Λ from 42 High-Redshift Supernovae. The Astrophysical Journal, 517(2), 565–586. https://doi.org/10.1086/307221
- Pochet, T., J. M. Pearson, G. Beaudet, and H. Reeves (1991). The Binding of Light Nuclei, and the Anthropic Principle. Astronomy & Astrophysics, 243, 1–4.
- Ramsey, Frank P. (1926). Truth and Probability. In R. B. Braithwaite (Ed.), The Foundations of Mathematics and other Logical Essays (156–198). Kegan, Paul, Trench, Trubner.
- Riess, Adam G., Alexei V. Filippenko, Peter Challis, Alejandro Clocchiatti, Alan Diercks, Peter M. Garnavich, . . . , John Tonry (1998). Observational Evidence from Supernovae for an Accelerating Universe and a Cosmological Constant. The Astronomical Journal, 116(3), 1009–1038. https://doi.org/10.1086/300499
- Roberts, John T. (2011). Fine-Tuning and the Infrared Bull’s-Eye. Philosophical Studies, 160(2), 287–303. https://doi.org/10.1007/s11098-011-9719-0
- Schellekens, A. N. (2013). Life at the Interface of Particle Physics and String Theory. Reviews of Modern Physics, 85(4), 1491–1540. https://doi.org/10.1103/RevModPhys.85.1491
- Schilpp, P. (Ed.) (1969). Albert Einstein: Philosopher-Scientist. Open Court Press.
- Swinburne, R. (2004). The Existence of God. Oxford University Press.
- ’t Hooft, G. (1980). Naturalness, Chiral Symmetry, and Spontaneous Chiral Symmetry Breaking. In G. ’t Hooft (Ed.), Recent Developments in Gauge Theories, Proceedings of 1979 Cargèse Institute (135–157). Plenum.
- Tegmark, Max (2005). What Does Inflation Really Predict? Journal of Cosmology and Astroparticle Physics, 2005(04), 001. https://doi.org/10.1088/1475-7516/2005/04/001
- Tegmark, Max, Anthony Aguirre, Martin J. Rees, and Frank Wilczek (2006). Dimensionless Constants, Cosmology, and Other Dark Matters. Physical Review D, 73(2), 023505. https://doi.org/10.1103/PhysRevD.73.023505
- Vilenkin, Alexander (1995). Making Predictions in an Eternally Inflating Universe. Physical Review D, 52(6), 3365–3374. https://doi.org/10.1103/PhysRevD.52.3365
- Vilenkin, Alexander (2007a). A Measure of the Multiverse. Journal of Physics A: Mathematical and Theoretical, 40(25), 6777–6785. https://doi.org/10.1088/1751-8113/40/25/S22
- Vilenkin, Alexander (2007b). Freak Observers and the Measure of the Multiverse. Journal of High Energy Physics, 2007(01), 092. https://doi.org/10.1088/1126-6708/2007/01/092
- Weinberg, Steven (1989). The Cosmological Constant Problem. Reviews of Modern Physics, 61(1), 1–23. https://doi.org/10.1103/RevModPhys.61.1
- Weisberg, Jonathan (2010). A Note on Design: What’s Fine-Tuning Got to Do with It? Analysis, 70(3), 431–438. https://doi.org/10.1093/analys/anq028
- Woit, P. (2007). Not Even Wrong: The Failure of String Theory and the Search for Unity in Physical Law. Basic Books.

## Notes

As our presentation of Bayesian probability is somewhat different from the usual philosophical presentation, I’ve used the notation most familiar to physicists.

Climenhaga (2019) argues that the correct situation in which to apply Equation (1) is when \(A\) is explanatorily prior to \(B\). Dropping the chronological mandate avoids the “problem of old evidence.” Glymour (1980) argues that, if we already know evidence \(E\), then \(p(E|T) = 1\) and \(p(E) = 1\), and thus \(p(T|E) = p(T)\). As discussed in Barnes (2018), this is not how to use Bayes’ theorem. Even if \(E\) is known, it is not taken as given in every probability we calculate. Every likelihood is the probability of some fact that we already know. Calculating the likelihood uses the same probability function that comes from Cox’s theorem; it does not require a new “ur-probability” function, generated by supposing “that one does not fully believe that E” (Monton 2006: 416). Bayesian degrees of plausibility/support aren’t about what any individual knows or believes; they are about what one proposition implies about the plausibility of another. Regarding the FTA, there is no need to argue in chronological order, taking life as background information and fine-tuning as new information (Roberts 2011).

Particle physicists sometimes list 26, including either the QCD vacuum phase or the cosmological constant, pilfered from cosmology. The six cosmological parameters above aren’t the ones most familiar to cosmologists (\(\Omega_\Lambda\), \(\Omega_m\), etc.); those are time-dependent. I omit the Hubble parameter, which effectively specifies the amount of time since the Big Bang; it is neither a fundamental constant nor an initial condition. I also omit the scalar spectral index.

More generally, a power-law distribution \(p(x) = \frac{a+1}{x_{up}} (x/x_{up})^a\) over the interval \([0,x_{up})\) with \(a > -1\) will not introduce another dimensionful parameter, but unless \(a\) is very close to \(-1\) it will not significantly affect cases of fine-tuning, and for all \(a>0\) it will in fact strengthen them. In particular, a distribution that is flat in logarithmic space (\(a = -1\)) will not do, because the value zero is in the allowed range. With no lower cutoff at a positive value of the parameter, the distribution is non-normalisable: for all \(x' > 0\), \(\int_0^{x'} d(\log x) = \infty\).
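The divergence in the log-flat case can be made explicit with a worked step (substituting \(u = \log x\) and taking the lower limit to zero):

\[
\int_0^{x'} d(\log x) \;=\; \lim_{\epsilon \to 0^+} \int_\epsilon^{x'} \frac{dx}{x} \;=\; \lim_{\epsilon \to 0^+} \bigl( \log x' - \log \epsilon \bigr) \;=\; \infty .
\]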

Note an interesting possible exception: the scalar fluctuation amplitude (\(Q\)). Roughly speaking, this is the lumpiness of the early universe, measuring the size of typical deviations away from perfect uniformity. In our Universe, \(Q = 2 \times 10^{-5}\). Looking beyond the standard model, inflationary theories typically predict a distribution of values for \(Q\). The shape of this distribution depends on the (largely unconstrained) parameters that characterise the inflaton field, but most values of \(Q\) fall within several orders of magnitude of unity (Tegmark 2005). However, \(Q\) is related to the initial low entropy of the Universe. Instead of a single parameter \(Q\) that assumes a near-uniform universe, we can more generally consider our universe’s volume in phase space. As has been argued by Penrose (1979; 1989; 2004), the early universe has low entropy because it is almost perfectly smooth. In thermodynamic terms, there is plenty of free energy available to be released, as gravity causes overdensities to grow, eventually forming galaxies and igniting stars. From a statistical mechanics point of view, “low entropy” roughly translates as “exhibiting an extremely rare arrangement of its constituents.” Penrose (1989: 445) calculates that the probability of a value of \(Q\) as small as ours is roughly one part in \(10^{{10}^{123}}\). Nevertheless, as this case raises thorny issues of initial conditions, infinities and entropy in cosmology, we will focus here on the fundamental constants.

For example, consider the class of power law distributions on \([0, 1]\): \(p(x) = (a+1) x^a\), for some constant \(a >0\) so that it decreases away from its peak at \(x = 1\). The probability of \(x\) being less than some limit \(x_1\) is \(p(<x_1) = x_1^{a+1} < x_1\).
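As an illustrative numerical check (not part of the original argument), the claim that \(p(<x_1) = x_1^{a+1} < x_1\) for these power-law distributions can be verified by direct integration; the function name below is my own:

```python
# Numerical check: for the power-law density p(x) = (a+1) x^a on [0, 1]
# with a > 0, the probability that x < x_1 is x_1^(a+1), which is smaller
# than x_1 itself (the peak at x = 1 pushes probability mass upward).
def prob_below(x1: float, a: float, n: int = 100_000) -> float:
    """Midpoint-rule estimate of the integral of (a+1) x^a from 0 to x1."""
    dx = x1 / n
    return sum((a + 1) * ((i + 0.5) * dx) ** a * dx for i in range(n))

a, x1 = 2.0, 0.1
estimate = prob_below(x1, a)
exact = x1 ** (a + 1)            # analytic result: x_1^(a+1)
assert abs(estimate - exact) < 1e-6
assert exact < x1                # the stated inequality p(<x_1) < x_1
```

The check confirms that the steeper the distribution (larger \(a\)), the less probability lies below any given \(x_1\), which is the sense in which such priors strengthen rather than weaken the fine-tuning cases.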

The naturalist cannot complain about the probability that God would create a universe at all, given that God exists. There is no reason for the universe to exist at all on naturalism either, so this particular likelihood duel is at best a draw.

Recall that in our notation, this sum represents or/union/disjunction.

In equations, if \(p(E|T_1B) = p(E|T_2B)\), then \(p(T_1|EB)/p(T_2|EB) = p(T_1|B)/p(T_2|B)\).