The Continuum Hypothesis

The continuum hypothesis (CH) is one of the most central open problems in set theory, one that is important for both mathematical and philosophical reasons.

The problem actually arose with the birth of set theory; indeed, in many respects it stimulated the birth of set theory. In 1874 Cantor had shown that there is a one-to-one correspondence between the natural numbers and the algebraic numbers. More surprisingly, he showed that there is no one-to-one correspondence between the natural numbers and the real numbers. Taking the existence of a one-to-one correspondence as a criterion for when two sets have the same size (something he certainly did by 1878), this result shows that there is more than one level of infinity and thus gave birth to the higher infinite in mathematics. Cantor immediately tried to determine whether there were any infinite sets of real numbers that were of intermediate size, that is, whether there was an infinite set of real numbers that could not be put into one-to-one correspondence with the natural numbers and could not be put into one-to-one correspondence with the real numbers. The continuum hypothesis (under one formulation) is simply the statement that there is no such set of real numbers. It was his attempt to prove this hypothesis that led Cantor to develop set theory into a sophisticated branch of mathematics. [1]

Despite his efforts Cantor could not resolve CH. The problem persisted and was considered so important by Hilbert that he placed it first on his famous list of open problems to be faced by the 20th century. Hilbert also struggled to resolve CH, again without success. Ultimately, this lack of progress was explained by the combined results of Gödel and Cohen, which together showed that CH cannot be resolved on the basis of the axioms that mathematicians were employing; in modern terms, CH is independent of Zermelo-Fraenkel set theory extended with the Axiom of Choice (ZFC).

This independence result was quickly followed by many others. The independence techniques were so powerful that set theorists soon found themselves preoccupied with the meta-theoretic enterprise of proving that certain fundamental statements could not be proved or refuted within ZFC. The question then arose as to whether there were ways to settle the independent statements. The community of mathematicians and philosophers of mathematics was largely divided on this question. The pluralists (like Cohen) maintained that the independence results effectively settled the question by showing that it had no answer. On this view, one could adopt a system in which, say, CH was an axiom and one could adopt a system in which ¬CH was an axiom and that was the end of the matter—there was no question as to which of the two incompatible extensions was the “correct” one. The non-pluralists (like Gödel) held that the independence results merely indicated the paucity of our means for circumscribing mathematical truth. On this view, what was needed were new axioms, axioms that are both justified and sufficient for the task. Gödel actually went further in proposing candidates for new axioms—large cardinal axioms—and he conjectured that they would settle CH.

Gödel's program for large cardinal axioms proved to be remarkably successful. Over the course of the next 30 years it was shown that large cardinal axioms settle many of the questions that were shown to be independent during the era of independence. However, CH was left untouched. The situation turned out to be rather ironic since in the end it was shown (in a sense that can be made precise) that although the standard large cardinal axioms effectively settle all questions of complexity strictly below that of CH, they cannot (by results of Levy and Solovay and others) settle CH itself. Thus, in choosing CH as a test case for his program, Gödel put his finger precisely on the point where it fails. It is for this reason that CH continues to play a central role in the search for new axioms.

In this entry we shall give an overview of the major approaches to settling CH and we shall discuss some of the major foundational frameworks which maintain that CH does not have an answer. The subject is a large one and we have had to sacrifice full comprehensiveness in two dimensions. First, we have not been able to discuss the major philosophical issues lying in the background. For this the reader is directed to the entry “Large Cardinals and Determinacy”, which contains a general discussion of the independence results, the nature of axioms, the nature of justification, and the successes of large cardinal axioms in the realm “below CH”. Second, we have not been able to discuss every approach to CH that is in the literature. Instead we have restricted ourselves to those approaches that appear most promising from a philosophical point of view and where the mathematics has been developed to a sufficiently advanced state. In the approaches we shall discuss—forcing axioms, inner model theory, quasi-large cardinals—the mathematics has been pressed to a very advanced stage over the course of 40 years. And this has made our task somewhat difficult. We have tried to keep the discussion as accessible as possible and we have placed the more technical items in the endnotes. But the reader should bear in mind that we are presenting a bird's eye view and that for a higher resolution at any point the reader should dip into the suggested readings that appear at the end of each section. [2]

There are really two kinds of approaches to new axioms—the local approach and the global approach. On the local approach one seeks axioms that answer questions concerning a specifiable fragment of the universe, such as V_{ω+1} or V_{ω+2}, where CH lies. On the global approach one seeks axioms that attempt to illuminate the entire structure of the universe of sets. The global approach is clearly much more challenging. In this entry we shall start with the local approach and toward the end we shall briefly touch upon the global approach.

Here is an overview of the entry: Section 1 surveys the independence results in cardinal arithmetic, covering both the case of regular cardinals (where CH lies) and singular cardinals. Section 2 considers approaches to CH where one successively verifies a hierarchy of approximations to CH, each of which is an “effective” version of CH. This approach led to the remarkable discovery of Woodin that it is possible (in the presence of large cardinals) to have an effective failure of CH, thereby showing that the effective failure of CH is as intractable (with respect to large cardinal axioms) as CH itself. Section 3 continues with the developments that stemmed from this discovery. The centerpiece of the discussion is the discovery of a “canonical” model in which CH fails. This formed the basis of a network of results that was collectively presented by Woodin as a case for the failure of CH. To present this case in the most streamlined form we introduce the strong logic Ω-logic. Section 4 takes up the competing foundational view that there is no solution to CH. This view is sharpened in terms of the generic multiverse conception of truth and that view is then scrutinized. Section 5 continues the assessment of the case for ¬CH by investigating a parallel case for CH. In the remaining two sections we turn to the global approach to new axioms and here we shall be much briefer. Section 6 discusses the approach through inner model theory. Section 7 discusses the approach through quasi-large cardinal axioms.

  • 1. Independence in Cardinal Arithmetic
  • 1.1 Regular Cardinals
  • 1.2 Singular Cardinals
  • 2. Definable Versions of the Continuum Hypothesis and its Negation
  • 2.1 Three Versions
  • 2.2 The Foreman-Magidor Program
  • 3. The Case for ¬CH
  • 3.1 ℙ_max
  • 3.2 Ω-Logic
  • 3.3 The Case
  • 4. The Multiverse
  • 4.1 Broad Multiverse Views
  • 4.2 The Generic Multiverse
  • 4.3 The Ω Conjecture and the Generic Multiverse
  • 4.4 Is There a Way Out?
  • 5. The Local Case Revisited
  • 5.1 The Case for ¬CH
  • 5.2 The Parallel Case for CH
  • 5.3 Assessment
  • 6. The Ultimate Inner Model
  • 7. The Structure Theory of L(V_{λ+1})
  • Bibliography
  • Other Internet Resources
  • Related Entries

1. Independence in Cardinal Arithmetic

In this section we shall discuss the independence results in cardinal arithmetic. First, we shall treat the case of regular cardinals, where CH lies and where very little is determined in the context of ZFC. Second, for the sake of comprehensiveness, we shall discuss the case of singular cardinals, where much more can be established in the context of ZFC.

1.1 Regular Cardinals

The addition and multiplication of infinite cardinal numbers are trivial: for infinite cardinals κ and λ,

κ + λ = κ ⋅ λ = max{κ,λ}.
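For example, as a direct instance of this rule, ℵ_3 + ℵ_7 = ℵ_3 ⋅ ℵ_7 = max{ℵ_3, ℵ_7} = ℵ_7.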

The situation becomes interesting when one turns to exponentiation and the attempt to compute κ^λ for infinite cardinals.

At the dawn of set theory Cantor showed that for every cardinal κ,

2^κ > κ.
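The proof is the standard diagonal argument: given any function f : κ → 𝒫(κ), the set D = {α < κ | α ∉ f(α)} differs from f(α) at the point α for every α < κ, so f is not onto 𝒫(κ); hence 2^κ = |𝒫(κ)| > κ.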

There is no mystery about the size of 2^n for finite n. The first natural question then is where 2^{ℵ_0} is located in the aleph hierarchy: Is it ℵ_1, ℵ_2, …, ℵ_17, or something much larger?

The cardinal 2^{ℵ_0} is important since it is the size of the continuum (the set of real numbers). Cantor's famous continuum hypothesis (CH) is the statement that 2^{ℵ_0} = ℵ_1. This is a special case of the generalized continuum hypothesis (GCH), which asserts that for all α, 2^{ℵ_α} = ℵ_{α+1}. One virtue of GCH is that it gives a complete solution to the problem of computing κ^λ for infinite cardinals: Assuming GCH, if κ ≤ λ then κ^λ = λ^+; if cf(κ) ≤ λ ≤ κ then κ^λ = κ^+; and if λ < cf(κ) then κ^λ = κ.
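To see the three clauses at work, here is a small worked computation (assuming GCH throughout): ℵ_0^{ℵ_2} = ℵ_3 (first clause, since ℵ_0 ≤ ℵ_2); ℵ_ω^{ℵ_1} = ℵ_{ω+1} (second clause, since cf(ℵ_ω) = ℵ_0 ≤ ℵ_1 ≤ ℵ_ω); and ℵ_1^{ℵ_0} = ℵ_1 (third clause, since ℵ_0 < cf(ℵ_1) = ℵ_1).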

Very little progress was made on CH and GCH. In fact, in the early era of set theory the only other piece of progress beyond Cantor's result that 2^κ > κ (and the trivial result that if κ ≤ λ then 2^κ ≤ 2^λ) was König's result that cf(2^κ) > κ.
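König's result already has bite: since cf(ℵ_ω) = ℵ_0, it rules out 2^{ℵ_0} = ℵ_ω (while leaving, for instance, 2^{ℵ_0} = ℵ_{ω+1} open).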

The explanation for the lack of progress was provided by the independence results in set theory:

Theorem (Gödel, 1938). Assume that ZFC is consistent. Then ZFC + CH and ZFC + GCH are consistent.

To prove this Gödel invented the method of inner models—he showed that CH and GCH held in the minimal inner model L of ZFC. Cohen then complemented this result:

Theorem (Cohen, 1963). Assume that ZFC is consistent. Then ZFC + ¬CH is consistent.

He did this by inventing the method of outer models and showing that CH failed in a generic extension V^B of V. The combined results of Gödel and Cohen thus demonstrate that, assuming the consistency of ZFC, it is in principle impossible to settle either CH or GCH in ZFC.

In the Fall of 1963 Easton completed the picture by showing that for infinite regular cardinals κ the only constraints on the function κ ↦ 2^κ that are provable in ZFC are the trivial constraint of monotonicity and the results of Cantor and König:

Theorem (Easton, 1963). Assume that ZFC is consistent. Suppose F is a (definable) function defined on the infinite regular cardinals such that

  • if κ ≤ λ then F(κ) ≤ F(λ),
  • F(κ) > κ, and
  • cf(F(κ)) > κ.

Then ZFC is consistent with the statement that 2^κ = F(κ) for all infinite regular cardinals κ.
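As a concrete instance (a standard illustration, not drawn from the entry itself): the function F(κ) = κ^{++} satisfies all three constraints, since it is monotone, it exceeds κ, and cf(κ^{++}) = κ^{++} > κ because successor cardinals are regular. So it is consistent with ZFC that 2^κ = κ^{++} for every infinite regular cardinal κ.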

Thus, set theorists had pushed the cardinal arithmetic of regular cardinals as far as it could be pushed within the confines of ZFC.

1.2 Singular Cardinals

The case of cardinal arithmetic on singular cardinals is much more subtle. For the sake of completeness we pause to briefly discuss it before proceeding with the continuum hypothesis.

It was generally believed that, as in the case for regular cardinals, the behaviour of the function κ ↦ 2^κ would be relatively unconstrained within the setting of ZFC. But then Silver proved the following remarkable result: [3]

Theorem (Silver, 1974). If ℵ_δ is a singular cardinal of uncountable cofinality and GCH holds below ℵ_δ, then GCH holds at ℵ_δ, that is, 2^{ℵ_δ} = ℵ_{δ+1}.

It turns out that (by a deep result of Magidor, published in 1977) GCH can first fail at ℵ_ω (assuming the consistency of a supercompact cardinal). Silver's theorem shows that it cannot first fail at ℵ_{ω_1}, and this is provable in ZFC.

This raises the question of whether one can “control” the size of 2^{ℵ_δ} with a weaker assumption than that ℵ_δ is a singular cardinal of uncountable cofinality such that GCH holds below ℵ_δ. The natural hypothesis to consider is that ℵ_δ is a singular cardinal of uncountable cofinality which is a strong limit cardinal, that is, such that for all α < ℵ_δ, 2^α < ℵ_δ. In 1975 Galvin and Hajnal proved (among other things) that under this weaker assumption there is indeed a bound:

Theorem (Galvin and Hajnal, 1975). If ℵ_δ is a singular strong limit cardinal of uncountable cofinality then

2^{ℵ_δ} < ℵ_{(|δ|^{cf(δ)})^+}.
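To unwind the notation in a concrete case (a routine instantiation, not a result quoted from the entry): take δ = ω_1 and suppose ℵ_{ω_1} is a strong limit. Since |ω_1|^{cf(ω_1)} = ℵ_1^{ℵ_1} = 2^{ℵ_1}, the bound reads 2^{ℵ_{ω_1}} < ℵ_{(2^{ℵ_1})^+}; if, say, 2^{ℵ_1} = ℵ_2, this gives 2^{ℵ_{ω_1}} < ℵ_{ω_3}.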

It is possible that there is a jump—in fact, Woodin showed (again assuming large cardinals) that it is possible that for all κ, 2^κ = κ^{++}. What the above theorem shows is that in ZFC there is a provable bound on how big the jump can be.

The next question is whether a similar situation prevails with singular cardinals of countable cofinality. In 1978 Shelah showed that this is indeed the case. To fix ideas let us concentrate on ℵ_ω.

Theorem (Shelah, 1978). If ℵ_ω is a strong limit cardinal then

2^{ℵ_ω} < ℵ_{(2^{ℵ_0})^+}.
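For instance (again a routine instantiation): if ℵ_ω is a strong limit and 2^{ℵ_0} = ℵ_2, then (2^{ℵ_0})^+ = ℵ_3 and the bound reads 2^{ℵ_ω} < ℵ_{ω_3}.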

One drawback of this result is that the bound is sensitive to the actual size of 2^{ℵ_0}, which can be anything below ℵ_ω. Remarkably, Shelah was later able to remedy this with the development of his pcf (possible cofinalities) theory. One very quotable result from this theory is the following:

Theorem (Shelah). If ℵ_ω is a strong limit cardinal then

2^{ℵ_ω} < ℵ_{ω_4}.

In summary, although the continuum function at regular cardinals is relatively unconstrained in ZFC, the continuum function at singular cardinals is (provably in ZFC) constrained in significant ways by the behaviour of the continuum function on the smaller cardinals.

Further Reading: For more on cardinal arithmetic see Jech (2003). For more on the case of singular cardinals and pcf theory see Abraham & Magidor (2010) and Holz, Steffens & Weitz (1999).

2. Definable Versions of the Continuum Hypothesis and its Negation

Let us return to the continuum function on regular cardinals and concentrate on the simplest case, the size of 2^{ℵ_0}. One of Cantor's original approaches to CH was to investigate “simple” sets of real numbers (see Hallett (1984), pp. 3–5 and §2.3(b)). One of the first results in this direction is the Cantor-Bendixson theorem that every infinite closed set is either countable or contains a perfect subset, in which case it has the same cardinality as the set of reals. In other words, CH holds (in this formulation) when one restricts one's attention to closed sets of reals. In general, questions about “definable” sets of reals are more tractable than questions about arbitrary sets of reals and this suggests looking at definable versions of the continuum hypothesis.

2.1 Three Versions

There are three different formulations of the continuum hypothesis—the interpolant version, the well-ordering version, and the surjection version. These versions are all equivalent to one another in ZFC but we shall be imposing a definability constraint and in this case there can be interesting differences (our discussion follows Martin (1976)). There is really a hierarchy of notions of definability—ranging up through the Borel hierarchy, the projective hierarchy, the hierarchy in L(ℝ), and, more generally, the hierarchy of universally Baire sets—and so each of these three general versions is really a hierarchy of versions, each corresponding to a given level of the hierarchy of definability (for a discussion of the hierarchy of definability see §2.2.1 and §4.6 of the entry “Large Cardinals and Determinacy”).

2.1.1 Interpolant Version

The first formulation of CH is that there is no interpolant , that is, there is no infinite set A of real numbers such that the cardinality of A is strictly between that of the natural numbers and the real numbers. To obtain definable versions one simply asserts that there is no “definable” interpolant and this leads to a hierarchy of definable interpolant versions, depending on which notion of definability one employs. More precisely, for a given pointclass Γ in the hierarchy of definable sets of reals, the corresponding definable interpolant version of CH asserts that there is no interpolant in Γ.

The Cantor-Bendixson theorem shows that there is no interpolant in Γ in the case where Γ is the pointclass of closed sets, thus verifying this version of CH. This was improved by Suslin, who showed that this version of CH holds where Γ is the class of Σ̰^1_1 sets. One cannot go much further within ZFC—to prove stronger versions one must bring in stronger assumptions. It turns out that axioms of definable determinacy and large cardinal axioms achieve this. For example, results of Kechris and Martin show that if Δ̰^1_n-determinacy holds then this version of CH holds for the pointclass of Σ̰^1_{n+1} sets. Going further, if one assumes AD^{L(ℝ)} then this version of CH holds for all sets of real numbers appearing in L(ℝ). Since these hypotheses follow from large cardinal axioms, one also has that stronger and stronger large cardinal assumptions secure stronger and stronger instances of this version of the effective continuum hypothesis. Indeed large cardinal axioms imply that this version of CH holds for all sets of reals in the definability hierarchy we are considering; more precisely, if there is a proper class of Woodin cardinals then this version of CH holds for all universally Baire sets of reals.

2.1.2 Well-ordering Version

The second formulation of CH asserts that every well-ordering of the reals has order type less than ℵ_2. For a given pointclass Γ in the hierarchy, the corresponding definable well-ordering version of CH asserts that every well-ordering (coded by a set) in Γ has order type less than ℵ_2.

Again, axioms of definable determinacy and large cardinal axioms imply this version of CH for richer notions of definability. For example, if AD^{L(ℝ)} holds then this version of CH holds for all sets of real numbers in L(ℝ). And if there is a proper class of Woodin cardinals then this version of CH holds for all universally Baire sets of reals.

2.1.3 Surjection Version

The third formulation of CH asserts that there is no surjection ρ : ℝ → ℵ_2, or, equivalently, that there is no prewellordering of ℝ of length ℵ_2. For a given pointclass Γ in the hierarchy of definability, the corresponding surjection version of CH asserts that there is no surjection ρ : ℝ → ℵ_2 such that (the code for) ρ is in Γ.

Here the situation is more interesting. Axioms of definable determinacy and large cardinal axioms have bearing on this version since they place bounds on how long definable prewellorderings can be. Let δ̰^1_n be the supremum of the lengths of the Σ̰^1_n-prewellorderings of reals and let Θ^{L(ℝ)} be the supremum of the lengths of prewellorderings of reals where the prewellordering is definable in the sense of being in L(ℝ). It is a classical result that δ̰^1_1 = ℵ_1. Martin showed that δ̰^1_2 ≤ ℵ_2 and that if there is a measurable cardinal then δ̰^1_3 ≤ ℵ_3. Kunen and Martin also showed that under PD, δ̰^1_4 ≤ ℵ_4, and Jackson showed that under PD, for each n < ω, δ̰^1_n < ℵ_ω. Thus, assuming that there are infinitely many Woodin cardinals, these bounds hold. Moreover, the bounds continue to hold regardless of the size of 2^{ℵ_0}. Of course, the question is whether these bounds can be improved to show that the prewellorderings are shorter than ℵ_2. In 1986 Foreman and Magidor initiated a program to establish this. In the most general form they aimed to show that large cardinal axioms implied that this version of CH held for all universally Baire sets of reals.

2.1.4 Potential Bearing on CH

Notice that in the context of ZFC, these three hierarchies of versions of CH are all successive approximations of CH and in the limit case, where Γ is the pointclass of all sets of reals, they are equivalent to CH. The question is whether these approximations can provide any insight into CH itself.

There is an asymmetry that was pointed out by Martin, namely, that a definable counterexample to CH is a real counterexample, while no matter how far one proceeds in verifying definable versions of CH at no stage will one have touched CH itself. In other words, the definability approach could refute CH but it could not prove it.

Still, one might argue that although the definability approach could not prove CH it might provide some evidence for it. In the case of the first two versions we now know that CH holds for all definable sets. Does this provide evidence for CH? Martin pointed out (before the full results were known) that this is highly doubtful since in each case one is dealing with sets that are atypical. For example, in the first version, at each stage one secures the definable version of CH by showing that all sets in the definability class have the perfect set property; yet such sets are atypical in that, assuming AC, it is easy to show that there are sets without this property. In the second version, at each stage one actually shows not only that each well-ordering of reals in the definability class has order type less than ℵ_2, but also that it has order type less than ℵ_1. So neither of these versions really illuminates CH.

The third version actually has an advantage in this regard since not all of the sets it deals with are atypical. For example, while all Σ̰^1_1-prewellorderings have length less than ℵ_1, there are Π̰^1_1-prewellorderings of length ℵ_1. Of course, it could be that even if the Foreman-Magidor program were to succeed, the sets involved would be atypical in another sense, in which case the program would shed little light on CH. More interesting, however, is the possibility that, in contrast to the first two versions, it would provide an actual counterexample to CH. This, of course, would require the failure of the Foreman-Magidor program.

2.2 The Foreman-Magidor Program

The goal of the Foreman-Magidor program was to show that large cardinal axioms also implied that the third version of CH held for all sets in L(ℝ) and, more generally, for all universally Baire sets. In other words, the goal was to show that large cardinal axioms implied that Θ^{L(ℝ)} ≤ ℵ_2 and, more generally, that Θ^{L(A,ℝ)} ≤ ℵ_2 for each universally Baire set A.

The motivation came from the celebrated results of Foreman, Magidor and Shelah on Martin's Maximum (MM), which showed that assuming large cardinal axioms one can always force to obtain a precipitous ideal on ℵ_2 without collapsing ℵ_2 (see Foreman, Magidor & Shelah (1988)). The program involved a two-part strategy:

  • (A) Strengthen this result to show that assuming large cardinal axioms one can always force to obtain a saturated ideal on ℵ_2 without collapsing ℵ_2.
  • (B) Show that the existence of such a saturated ideal implies that Θ^{L(ℝ)} ≤ ℵ_2 and, more generally, that Θ^{L(A,ℝ)} ≤ ℵ_2 for every universally Baire set A.

Together (A) and (B) would show that Θ^{L(ℝ)} ≤ ℵ_2 and, more generally, that Θ^{L(A,ℝ)} ≤ ℵ_2 for every universally Baire set A. [4]

In December 1991, the following result dashed the hopes of this program:

Theorem (Woodin, 1991). Assume that there is a measurable cardinal and that the non-stationary ideal on ω_1 is saturated. Then δ̰^1_2 = ℵ_2.

The point is that the hypothesis of this theorem can always be forced assuming large cardinals. Thus, it is possible to have Θ^{L(ℝ)} > ℵ_2 (in fact, δ̰^1_3 > ℵ_2).

Where did the program go wrong? Foreman and Magidor had an approximation to (B) and in the end it turned out that (B) is true.

So the trouble is with (A).

This illustrates an interesting contrast between our three versions of the effective continuum hypothesis, namely, that they can come apart. For while large cardinals rule out definable counterexamples of the first two kinds, they cannot rule out definable counterexamples of the third kind. But again we must stress that they cannot prove that there are such counterexamples.

But there is an important point: Assuming large cardinal axioms (AD^{L(ℝ)} suffices), although one can produce outer models in which δ̰^1_3 > ℵ_2, it is not currently known how to produce outer models in which δ̰^1_3 > ℵ_3 or even Θ^{L(ℝ)} > ℵ_3. Thus it is an open possibility that from ZFC + AD^{L(ℝ)} one can prove Θ^{L(ℝ)} ≤ ℵ_3. Were this to be the case, it would follow that although large cardinals cannot rule out the definable failure of CH they can rule out the definable failure of 2^{ℵ_0} = ℵ_2. This could provide some insight into the size of the continuum, underscoring the centrality of ℵ_2.

Further Reading : For more on the three effective versions of CH see Martin (1976); for more on the Foreman-Magidor program see Foreman & Magidor (1995) and the introduction to Woodin (1999).

3. The Case for ¬CH

The above results led Woodin to the identification of a “canonical” model in which CH fails and this formed the basis of his argument that CH is false. In Section 3.1 we will describe the model and in the remainder of the section we will present the case for the failure of CH. In Section 3.2 we will introduce Ω-logic and the other notions needed to make the case. In Section 3.3 we will present the case.

3.1 ℙ_max

The goal is to find a model in which CH is false and which is canonical in the sense that its theory cannot be altered by set forcing in the presence of large cardinals. The background motivation is this: First, we know that in the presence of large cardinal axioms the theory of second-order arithmetic and even the entire theory of L(ℝ) is invariant under set forcing. The importance of this is that it demonstrates that our main independence techniques cannot be used to establish the independence of questions about second-order arithmetic (or about L(ℝ)) in the presence of large cardinals. Second, experience has shown that the large cardinal axioms in question seem to answer all of the major known open problems about second-order arithmetic and L(ℝ) and the set forcing invariance theorems give precise content to the claim that these axioms are “effectively complete”. [5]

It follows that if ℙ is any homogeneous partial order in L(ℝ) then the generic extension L(ℝ)^ℙ inherits the generic absoluteness of L(ℝ). Woodin discovered that there is a very special partial order ℙ_max that has this feature. Moreover, the model L(ℝ)^{ℙ_max} satisfies ZFC + ¬CH. The key feature of this model is that it is “maximal” (or “saturated”) with respect to sentences that are of a certain complexity and which can be shown to be consistent via set forcing over the model; in other words, if these sentences can hold (by set forcing over the model) then they do hold in the model. To state this more precisely we are going to have to introduce a few rather technical notions.

There are two ways of stratifying the universe of sets. The first is in terms of ⟨V_α | α ∈ On⟩, the second is in terms of ⟨H(κ) | κ ∈ Card⟩, where H(κ) is the set of all sets which have cardinality less than κ, whose members have cardinality less than κ, whose members of members have cardinality less than κ, and so on. For example, H(ω) = V_ω and the theories of the structures H(ω_1) and V_{ω+1} are mutually interpretable. This latter structure is the structure of second-order arithmetic and, as mentioned above, large cardinal axioms give us an “effectively complete” understanding of this structure. We should like to be in the same position with regard to larger and larger fragments of the universe and the question is whether we should proceed in terms of the first or the second stratification.

The second stratification is potentially more fine-grained. Assuming CH one has that the theories of H(ω_2) and V_{ω+2} are mutually interpretable and assuming larger and larger fragments of GCH this correspondence continues upward. But if CH is false then the structure H(ω_2) is less rich than the structure V_{ω+2}. In this event the latter structure captures full third-order arithmetic, while the former captures only a small fragment of third-order arithmetic but is nevertheless rich enough to express CH. Given this, in attempting to understand the universe of sets by working up through it level by level, it is sensible to use the potentially more fine-grained stratification.

Our next step is therefore to understand H(ω_2). It actually turns out that we will be able to understand slightly more and this is somewhat technical. We will be concerned with the structure ⟨H(ω_2), ∈, I_NS, A_G⟩, where I_NS is the non-stationary ideal on ω_1 and A_G is the interpretation of (the canonical representation of) a set of reals A in L(ℝ). The details will not be important and the reader is asked to just think of H(ω_2) along with some “extra stuff” and not worry about the details concerning the extra stuff. [6]

We are now in a position to state the main result:

Theorem (Woodin). Assume ZFC and that there is a proper class of Woodin cardinals. Suppose that A ∈ 𝒫(ℝ) ∩ L(ℝ), that φ is a Π_2-sentence (in the extended language with the two additional predicates), and that in some set-generic extension

⟨H(ω_2), ∈, I_NS, A_G⟩ ⊧ φ.

Then

L(ℝ)^{ℙ_max} ⊧ “⟨H(ω_2), ∈, I_NS, A⟩ ⊧ φ”.

There are two key points: First, the theory of L(ℝ)^{ℙ_max} is “effectively complete” in the sense that it is invariant under set forcing. Second, the model L(ℝ)^{ℙ_max} is “maximal” (or “saturated”) in the sense that it satisfies all Π_2-sentences (about the relevant structure) that can possibly hold (in the sense that they can be shown to be consistent by set forcing over the model).

One would like to get a handle on the theory of this structure by axiomatizing it. The relevant axiom is the following:

Definition (The axiom (∗)). AD^{L(ℝ)} holds and L(𝒫(ω_1)) is a ℙ_max-generic extension of L(ℝ).

Finally, this axiom settles CH:

Theorem (Woodin). Assume (∗). Then 2^{ℵ_0} = ℵ_2.

3.2 Ω-Logic

We will now recast the above results in terms of a strong logic. We shall make full use of large cardinal axioms and in this setting we are interested in logics that are “well-behaved” in the sense that the question of what implies what is not radically independent. For example, it is well known that CH is expressible in full second-order logic. It follows that in the presence of large cardinals one can always use set forcing to flip the truth-value of a purported logical validity of full second-order logic. However, there are strong logics—like ω-logic and β-logic—that do not have this feature: they are well-behaved in the sense that in the presence of large cardinal axioms the question of what implies what cannot be altered by set forcing. We shall introduce a very strong logic that has this feature—Ω-logic. In fact, the logic we shall introduce can be characterized as the strongest logic with this feature (see Koellner (2010) for further discussion of strong logics and for a precise statement of this result).

3.2.1 Ω-logic

Definition. Suppose that T is a countable theory in the language of set theory and that φ is a sentence. Then

T ⊧_Ω φ

if for all complete Boolean algebras B and for all ordinals α, if V_α^B ⊧ T then V_α^B ⊧ φ.

We say that a statement φ is Ω-satisfiable if there exists an ordinal α and a complete Boolean algebra B such that V_α^B ⊧ φ, and we say that φ is Ω-valid if ∅ ⊧_Ω φ. The relation ⊧_Ω is generically invariant:

Theorem 3.5 (Woodin). Assume ZFC and that there is a proper class of Woodin cardinals. Then, for every theory T, sentence φ, and complete Boolean algebra B,

T ⊧_Ω φ iff V^B ⊧ “T ⊧_Ω φ”.

In particular, under our background assumptions the statement “φ is Ω-satisfiable” is generically invariant.

Thus this logic is robust in that the question of what implies what is invariant under set forcing.
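Two quick illustrations may help (standard observations, not quoted from the entry): if ZFC ⊢ φ in the ordinary first-order sense then ZFC ⊧_Ω φ, since any structure V_α^B satisfying ZFC satisfies all of its first-order consequences. The converse fails: for example, ZFC ⊧_Ω Con(ZFC), since any V_α^B satisfying ZFC is a wellfounded model, its arithmetic is therefore standard, and its very existence guarantees that Con(ZFC) is true (and hence true in it). So Ω-logic properly extends first-order provability.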

3.2.2 The Ω Conjecture

Corresponding to the semantic relation ⊧_Ω there is a quasi-syntactic proof relation ⊢_Ω. The “proofs” are certain robust sets of reals (universally Baire sets of reals) and the test structures are models that are “closed” under these proofs. The precise notions of “closure” and “proof” are somewhat technical and so we will pass over them in silence. [7]

Like the semantic relation, this quasi-syntactic proof relation is robust under large cardinal assumptions:

Theorem (Woodin). Assume ZFC and that there is a proper class of Woodin cardinals. Then, for every theory T, sentence φ, and complete Boolean algebra B,

T ⊢_Ω φ iff V^B ⊧ “T ⊢_Ω φ”.

Thus, we have a semantic consequence relation and a quasi-syntactic proof relation, both of which are robust under the assumption of large cardinal axioms. It is natural to ask whether the soundness and completeness theorems hold for these relations. The soundness theorem is known to hold:

Theorem (Woodin). Assume ZFC. If T ⊢_Ω φ then T ⊧_Ω φ.

It is open whether the corresponding completeness theorem holds. The Ω Conjecture is simply the assertion that it does:

Conjecture (The Ω Conjecture). Assume ZFC and that there is a proper class of Woodin cardinals. Then for each sentence φ,

∅ ⊧_Ω φ iff ∅ ⊢_Ω φ.

We will need a strong form of this conjecture, which we shall call the Strong Ω Conjecture. It is somewhat technical and so we will pass over it in silence. [8]

3.2.3 Ω-Complete Theories

Recall that one key virtue of large cardinal axioms is that they “effectively settle” the theory of second-order arithmetic (and, in fact, the theory of L(ℝ) and more) in the sense that in the presence of large cardinals one cannot use the method of set forcing to establish independence with respect to statements about L(ℝ). This notion of invariance under set forcing played a key role in Section 3.1. We can now rephrase this notion in terms of Ω-logic:

Definition. A theory T is Ω-complete for a collection of sentences Γ if for each φ ∈ Γ, either T ⊧_Ω φ or T ⊧_Ω ¬φ.

The invariance of the theory of L(ℝ) under set forcing can now be rephrased as follows:

Theorem (Woodin). Assume ZFC and that there is a proper class of Woodin cardinals. Then ZFC is Ω-complete for the collection of sentences of the form “L(ℝ) ⊧ φ”.

Unfortunately, it follows from a series of results originating with work of Levy and Solovay that traditional large cardinal axioms do not yield Ω-complete theories at the level of Σ^2_1 since one can always use a “small” (and hence large cardinal preserving) forcing to alter the truth-value of CH.

3.3 The Case

Nevertheless, if one supplements large cardinal axioms, then Ω-complete theories are forthcoming. This is the centerpiece of the case against CH:

Theorem (Woodin). Assume that there is a proper class of Woodin cardinals and that the Strong Ω Conjecture holds.
(1) There is an axiom A such that

  • ZFC + A is Ω-satisfiable and
  • ZFC + A is Ω-complete for the structure H(ω_2).

(2) Any such axiom A has the feature that

ZFC + A ⊧_Ω “H(ω_2) ⊧ ¬CH”.

Let us rephrase this as follows: For each A satisfying (1), let

T_A = {φ | ZFC + A ⊧_Ω “H(ω_2) ⊧ φ”}.

The theorem says that if there is a proper class of Woodin cardinals and the Strong Ω Conjecture holds, then there are (non-trivial) Ω-complete theories T_A of H(ω_2) and all such theories contain ¬CH.

It is natural to ask whether there is greater agreement among the Ω-complete theories T_A. Ideally, there would be just one. A recent result (building on Theorem 5.5) shows that if there is one such theory then there are many such theories:

Theorem (Koellner and Woodin). Assume that there is a proper class of Woodin cardinals. Suppose that A is an axiom such that

  i. ZFC + A is Ω-satisfiable and
  ii. ZFC + A is Ω-complete for the structure H(ω_2).

Then there is an axiom B such that

  i′. ZFC + B is Ω-satisfiable and
  ii′. ZFC + B is Ω-complete for the structure H(ω_2)

and the associated theories T_A and T_B are distinct.

How then shall one select from among these theories? Woodin's work in this area goes a good deal beyond Theorem 5.1. In addition to isolating an axiom that satisfies (1) of Theorem 5.1 (assuming Ω-satisfiability), he isolates a very special such axiom, namely, the axiom (∗) (“star”) mentioned earlier.

This axiom can be phrased in terms of (the provability notion of) Ω-logic:

Theorem (Woodin). Assume ZFC and that there is a proper class of Woodin cardinals. Then (∗) is equivalent to the following: For each Π_2-sentence φ in the language of the structure

⟨H(ω_2), ∈, I_NS, A | A ∈ 𝒫(ℝ) ∩ L(ℝ)⟩,

if

ZFC + “⟨H(ω_2), ∈, I_NS, A | A ∈ 𝒫(ℝ) ∩ L(ℝ)⟩ ⊧ φ”

is Ω-consistent, then

⟨H(ω_2), ∈, I_NS, A | A ∈ 𝒫(ℝ) ∩ L(ℝ)⟩ ⊧ φ.

It follows that of the various theories T_A involved in Theorem 5.1, there is one that stands out: the theory T_{(∗)} given by (∗). This theory maximizes the Π_2-theory of the structure ⟨H(ω_2), ∈, I_NS, A | A ∈ 𝒫(ℝ) ∩ L(ℝ)⟩.

The continuum hypothesis fails in this theory. Moreover, in the maximal theory T_{(∗)} given by (∗) the size of the continuum is ℵ_2. [9]

To summarize: Assuming the Strong Ω Conjecture, there is a “good” theory of H(ω_2) and all such theories imply that CH fails. Moreover (again, assuming the Strong Ω Conjecture), there is a maximal such theory and in that theory 2^{ℵ_0} = ℵ_2.

Further Reading: For the mathematics concerning ℙ_max see Woodin (1999). For an introduction to Ω-logic see Bagaria, Castells & Larson (2006). For more on incompatible Ω-complete theories see Koellner & Woodin (2009). For more on the case against CH see Woodin (2001a,b, 2005a,b).

4. The Multiverse

The above case for the failure of CH is the strongest known local case for axioms that settle CH. In this section and the next we will switch sides and consider the pluralist arguments to the effect that CH does not have an answer (in this section) and to the effect that there is an equally good case for CH (in the next section). In the final two sections we will investigate optimistic global scenarios that provide hope of settling the issue.

The pluralist maintains that the independence results effectively settle the undecided questions by showing that they have no answer. One way of providing a foundational framework for such a view is in terms of the multiverse. On this view there is not a single universe of set theory but rather a multiverse of legitimate candidates, some of which may be preferable to others for certain purposes but none of which can be said to be the “true” universe. The multiverse conception of truth is the view that a statement of set theory can only be said to be true simpliciter if it is true in all universes of the multiverse. For the purposes of this discussion we shall say that a statement is indeterminate according to the multiverse conception if it is neither true nor false according to the multiverse conception. How radical such a view is depends on the breadth of the conception of the multiverse.

4.1 Broad Multiverse Views

The pluralist is generally a non-pluralist about certain domains of mathematics. For example, a strict finitist might be a non-pluralist about PA but a pluralist about set theory, and one might be a non-pluralist about ZFC and a pluralist about large cardinal axioms and statements like CH.

There is a form of radical pluralism which advocates pluralism concerning all domains of mathematics. On this view any consistent theory is a legitimate candidate and the corresponding models of such theories are legitimate candidates for the domain of mathematics. Let us call this the broadest multiverse view. There is a difficulty in articulating this view, which may be brought out as follows: To begin with, one must pick a background theory in which to discuss the various models, and this leads to a difficulty. For example, according to the broad multiverse conception, since PA cannot prove Con(PA) (by the second incompleteness theorem, assuming that PA is consistent) there are models of PA + ¬Con(PA) and these models are legitimate candidates, that is, they are universes within the broad multiverse. Now to arrive at this conclusion one must (in the background theory) be in a position to prove Con(PA) (since this assumption is required to apply the second incompleteness theorem in this particular case). Thus, from the perspective of the background theory used to argue that the above models are legitimate candidates, the models in question satisfy a false Σ^0_1-sentence, namely, ¬Con(PA). In short, there is a lack of harmony between what is held at the meta-level and what is held at the object-level.

The only way out of this difficulty would seem to be to regard each viewpoint—each articulation of the multiverse conception—as provisional and, when pressed, embrace pluralism concerning the background theory. In other words, one would have to adopt a multiverse conception of the multiverse, a multiverse conception of the multiverse conception of the multiverse, and so on, off to infinity. It follows that such a position can never be fully articulated—each time one attempts to articulate the broad multiverse conception one must employ a background theory but since one is a pluralist about that background theory this pass at using the broad multiverse to articulate the conception does not do the conception full justice. The position is thus difficult to articulate. One can certainly take the pluralist stance and try to gesture toward or exhibit the view that one intends by provisionally settling on a particular background theory but then advocate pluralism regarding that when pressed. The view is thus something of a “moving target”. We shall pass over this view in silence and concentrate on views that can be articulated within a foundational framework.

We will accordingly look at views which embrace non-pluralism with regard to a given stretch of mathematics; for reasons of space, and because this is an entry on set theory, we will pass over the long debates concerning strict finitism, finitism, and predicativism, and start with views that embrace non-pluralism regarding ZFC.

Let the broad multiverse (based on ZFC) be the collection of all models of ZFC. The broad multiverse conception of truth (based on ZFC) is then simply the view that a statement of set theory is true simpliciter if it is provable in ZFC. On this view the statement Con(ZFC) and other undecided Π^0_1-statements are classified as indeterminate. This view thus faces a difficulty parallel to the one mentioned above concerning radical pluralism.

This motivates the shift to views that narrow the class of universes in the multiverse by employing a strong logic. For example, one can restrict to universes that are ω-models, β-models (i.e., wellfounded), etc. On the view where one takes ω-models, the statement Con(ZFC) is classified as true (though this is sensitive to the background theory) but the statement PM (all projective sets are Lebesgue measurable) is classified as indeterminate.

For those who are convinced by the arguments (surveyed in the entry “Large Cardinals and Determinacy”) for large cardinal axioms and axioms of definable determinacy, even these multiverse conceptions are too weak. We will follow this route. For the rest of this entry we will embrace non-pluralism concerning large cardinal axioms and axioms of definable determinacy and focus on the question of CH.

4.2 The Generic Multiverse

The motivation behind the generic multiverse is to grant the case for large cardinal axioms and definable determinacy but deny that statements such as CH have a determinate truth value. To be specific about the background theory let us take ZFC + “There is a proper class of Woodin cardinals” and recall that this large cardinal assumption secures axioms of definable determinacy such as PD and AD^{L(ℝ)}.

Let the generic multiverse 𝕍 be the result of closing V under generic extensions and generic refinements. One way to formalize this is to take an external vantage point and start with a countable transitive model M. The generic multiverse based on M is then the smallest set 𝕍_M such that M ∈ 𝕍_M and, for each pair of countable transitive models (N, N[G]) such that N ⊧ ZFC and G ⊆ ℙ is N-generic for some partial order ℙ ∈ N, if either N or N[G] is in 𝕍_M then both N and N[G] are in 𝕍_M.
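Equivalently (a routine unwinding of the closure condition, not a claim from the entry): N ∈ 𝕍_M just in case there is a finite chain M = N_0, N_1, …, N_k = N of countable transitive models of ZFC such that for each i < k, either N_{i+1} is a set-generic extension of N_i or N_i is a set-generic extension of N_{i+1}.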

Let the generic multiverse conception of truth be the view that a statement is true simpliciter iff it is true in all universes of the generic multiverse. We will call such a statement a generic multiverse truth. A statement is said to be indeterminate according to the generic multiverse conception iff it is neither true nor false according to the generic multiverse conception. For example, granting our large cardinal assumptions, such a view deems PM (and PD and AD^{L(ℝ)}) true but deems CH indeterminate.

Is the generic multiverse conception of truth tenable? The answer to this question is closely related to the subject of Ω-logic. The basic connection between generic multiverse truth and Ω-logic is embodied in the following theorem:

Theorem (Woodin). Assume ZFC and that there is a proper class of Woodin cardinals. Then, for each Π_2-statement φ, the following are equivalent:

  • φ is a generic multiverse truth.
  • φ is Ω-valid.

Now, recall that by Theorem 3.5, under our background assumptions, Ω-validity is generically invariant. It follows that, given our background theory, the notion of generic multiverse truth is robust with respect to Π_2-statements. In particular, for Π_2-statements, the statement “φ is indeterminate” is itself determinate according to the generic multiverse conception. In this sense the conception of truth is not “self-undermining” and one is not sent in a downward spiral where one has to countenance multiverses of multiverses. So it passes the first test. Whether it passes a more challenging test depends on the Ω Conjecture.

4.3 The Ω Conjecture and the Generic Multiverse

The Ω Conjecture has profound consequences for the generic multiverse conception of truth. Let

𝒱_Ω = {φ | ∅ ⊧_Ω φ}

and, for any specifiable cardinal κ, let

𝒱_Ω(H(κ^+)) = {φ | ZFC ⊧_Ω “H(κ^+) ⊧ φ”},

where, recall, H(κ^+) is the collection of sets of hereditary cardinality less than κ^+. Thus, assuming ZFC and that there is a proper class of Woodin cardinals, the set 𝒱_Ω is Turing equivalent to the set of Π_2 generic multiverse truths and the set 𝒱_Ω(H(κ^+)) is precisely the set of generic multiverse truths of H(κ^+).

To describe the bearing of the Ω Conjecture on the generic-multiverse conception of truth, we introduce two Transcendence Principles which serve as constraints on any tenable conception of truth in set theory—a truth constraint and a definability constraint.

The Truth Constraint: For every specifiable cardinal κ, the set of Π_2-truths (according to the conception of truth in question) is not Turing reducible to the set of truths of H(κ^+) (according to that conception).

This constraint is in the spirit of those principles of set theory—most notably, reflection principles—which aim to capture the pretheoretic idea that the universe of sets is so rich that it cannot “be described from below”; more precisely, it asserts that any tenable conception of truth must respect the idea that the universe of sets is so rich that truth (or even just Π_2-truth) cannot be described in some specifiable fragment. (Notice that by Tarski's theorem on the undefinability of truth, the truth constraint is trivially satisfied by the standard conception of truth in set theory which takes the multiverse to contain a single element, namely, V.)

There is also a related constraint concerning the definability of truth. For a specifiable cardinal κ, a set Y ⊆ ω is definable in H(κ^+) across the multiverse if Y is definable in the structure H(κ^+) of each universe of the multiverse (possibly by formulas which depend on the parent universe).

The Definability Constraint: For every specifiable cardinal κ, the set of Π_2-truths is not definable in H(κ^+) across the multiverse.

Notice again that by Tarski's theorem on the undefinability of truth, the definability constraint is trivially satisfied by the degenerate multiverse conception that takes the multiverse to contain the single element V. (Notice also that if one modifies the definability constraint by adding the requirement that the definition be uniform across the multiverse, then the constraint would automatically be met.)

The bearing of the Ω Conjecture on the tenability of the generic-multiverse conception of truth is contained in the following two theorems:

Theorem (Woodin). Assume ZFC and that there is a proper class of Woodin cardinals and that the Ω Conjecture holds. Then 𝒱_Ω is Turing reducible to 𝒱_Ω(H(δ_0^+)), where δ_0 is the least Woodin cardinal.

Theorem (Woodin). Assume ZFC and that there is a proper class of Woodin cardinals and that the Ω Conjecture holds. Then 𝒱_Ω is definable in H(δ_0^+) across the generic multiverse, where δ_0 is the least Woodin cardinal.

In other words, if there is a proper class of Woodin cardinals and if the Ω Conjecture holds then the generic multiverse conception of truth violates both the Truth Constraint (at δ_0) and the Definability Constraint (at δ_0).

There are actually sharper versions of the above results that involve H(c^+) in place of H(δ_0^+).

In other words, if there is a proper class of Woodin cardinals and if the Strong Ω Conjecture holds then the generic-multiverse conception of truth violates the Truth Constraint at the level of third-order arithmetic, and if, in addition, the AD^+ Conjecture holds, then the generic-multiverse conception of truth violates the Definability Constraint at the level of third-order arithmetic.

4.4 Is There a Way Out?

There appear to be four ways that the advocate of the generic multiverse might resist the above criticism.

First, one could maintain that the Ω Conjecture is just as problematic as CH and hence, like CH, it is to be regarded as indeterminate according to the generic-multiverse conception of truth. The difficulty with this approach is the following:

Theorem (Woodin). Assume ZFC and that there is a proper class of Woodin cardinals. Then, for every complete Boolean algebra 𝔹,

V ⊧ Ω Conjecture iff V^𝔹 ⊧ Ω Conjecture.

Thus, in contrast to CH, the Ω Conjecture cannot be shown to be independent of ZFC + “There is a proper class of Woodin cardinals” via set forcing. In terms of the generic multiverse conception of truth, we can put the point this way: While the generic-multiverse conception of truth deems CH to be indeterminate, it does not deem the Ω Conjecture to be indeterminate. So the above response is not available to the advocate of the generic-multiverse conception of truth. The advocate of that conception already deems the Ω Conjecture to be determinate.

Second, one could grant that the Ω Conjecture is determinate but maintain that it is false. There are ways in which one might do this but that does not undercut the above argument. The reason is the following: To begin with, there is a closely related Σ_2-statement that one can substitute for the Ω Conjecture in the above arguments. This is the statement that the Ω Conjecture is (non-trivially) Ω-satisfiable, that is, the statement: There exists an ordinal α and a universe V′ of the multiverse such that

V′_α ⊧ ZFC + “There is a proper class of Woodin cardinals”

and

V′_α ⊧ “The Ω Conjecture”.

This Σ_2-statement is invariant under set forcing and hence is one that adherents of the generic multiverse view of truth must deem determinate. Moreover, the key arguments above go through with this Σ_2-statement instead of the Ω Conjecture. The person taking this second line of response would thus also have to maintain that this statement is false. But there is substantial evidence that this statement is true. The reason is that there is no known example of a Σ_2-statement that is invariant under set forcing relative to large cardinal axioms and which cannot be settled by large cardinal axioms. (Such a statement would be a candidate for an absolutely undecidable statement.) So it is reasonable to expect that this statement is resolved by large cardinal axioms. However, recent advances in inner model theory—in particular, those in Woodin (2010)—provide evidence that no large cardinal axiom can refute this statement. Putting everything together: It is very likely that this statement is in fact true; so this line of response is not promising.

Third, one could reject either the Truth Constraint or the Definability Constraint. The trouble is that if one rejects the Truth Constraint then on this view (assuming the Ω Conjecture) Π_2 truth in set theory is reducible in the sense of Turing reducibility to truth in H(δ_0) (or, assuming the Strong Ω Conjecture, H(c^+)). And if one rejects the Definability Constraint then on this view (assuming the Ω Conjecture) Π_2 truth in set theory is reducible in the sense of definability to truth in H(δ_0) (or, assuming the Strong Ω Conjecture, H(c^+)). On either view, the reduction is in tension with the acceptance of non-pluralism regarding the background theory ZFC + “There is a proper class of Woodin cardinals”.

Fourth, one could embrace the criticism, reject the generic multiverse conception of truth, and admit that there are some statements about H(δ_0^+) (or H(c^+), granting, in addition, the AD^+ Conjecture) that are true simpliciter but not true in the sense of the generic-multiverse, and yet nevertheless continue to maintain that CH is indeterminate. The difficulty is that any such sentence φ is qualitatively just like CH in that it can be forced to hold and forced to fail. The challenge for the advocate of this approach is to modify the generic-multiverse conception of truth in such a way that it counts φ as determinate and yet counts CH as indeterminate.

In summary: There is evidence that the only way out is the fourth way out and this places the burden back on the pluralist—the pluralist must come up with a modified version of the generic multiverse.

Further Reading : For more on the connection between Ω-logic and the generic multiverse and the above criticism of the generic multiverse see Woodin (2011a). For the bearing of recent results in inner model theory on the status of the Ω Conjecture see Woodin (2010).

5. The Local Case Revisited

Let us now turn to a second way in which one might resist the local case for the failure of CH. This involves a parallel case for CH. In Section 5.1 we will review the main features of the case for ¬CH in order to compare it with the parallel case for CH. In Section 5.2 we will present the parallel case for CH. In Section 5.3 we will assess the comparison.

5.1 The Case for ¬CH

Recall that there are two basic steps in the case presented in Section 3.3. The first step involves Ω-completeness (and this gives ¬CH) and the second step involves maximality (and this gives the stronger 2^{ℵ_0} = ℵ_2). For ease of comparison we shall repeat these features here:

The first step is based on the following result:

Theorem 5.1 (Woodin). Assume that there is a proper class of Woodin cardinals and that the Strong Ω Conjecture holds.
(1) There is an axiom A such that

  i. ZFC + A is Ω-satisfiable and
  ii. ZFC + A is Ω-complete for the structure H(ω_2).

(2) Any such axiom A has the feature that

ZFC + A ⊧_Ω “H(ω_2) ⊧ ¬CH”.

For each A satisfying (1), let

T_A = {φ | ZFC + A ⊧_Ω “H(ω_2) ⊧ φ”}.

The theorem says that if there is a proper class of Woodin cardinals and the Strong Ω Conjecture holds, then there are (non-trivial) Ω-complete theories T_A of H(ω_2) and all such theories contain ¬CH. In other words, under these assumptions, there is a “good” theory and all “good” theories imply ¬CH.

The second step begins with the question of whether there is greater agreement among the Ω-complete theories T_A. Ideally, there would be just one. However, this is not the case:

Theorem 5.2 (Koellner and Woodin). Assume that there is a proper class of Woodin cardinals. Suppose that A is an axiom such that

  i. ZFC + A is Ω-satisfiable and
  ii. ZFC + A is Ω-complete for the structure H(ω_2).

Then there is an axiom B such that

  i′. ZFC + B is Ω-satisfiable and
  ii′. ZFC + B is Ω-complete for the structure H(ω_2)

and the associated theories T_A and T_B are distinct.
This raises the issue of how one is to select from among these theories. It turns out that there is a maximal theory among the T_A and this is given by the axiom (∗):

Theorem 5.3 (Woodin). Assume that there is a proper class of Woodin cardinals. Then (∗) holds iff for each Π_2-sentence φ in the language of the structure

⟨H(ω_2), ∈, I_NS, A | A ∈ 𝒫(ℝ) ∩ L(ℝ)⟩,

if

ZFC + “⟨H(ω_2), ∈, I_NS, A | A ∈ 𝒫(ℝ) ∩ L(ℝ)⟩ ⊧ φ”

is Ω-consistent, then

⟨H(ω_2), ∈, I_NS, A | A ∈ 𝒫(ℝ) ∩ L(ℝ)⟩ ⊧ φ.

So, of the various theories T_A involved in Theorem 5.1, there is one that stands out: the theory T_{(∗)} given by (∗). This theory maximizes the Π_2-theory of the structure ⟨H(ω_2), ∈, I_NS, A | A ∈ 𝒫(ℝ) ∩ L(ℝ)⟩. The fundamental result is that in this maximal theory

2^{ℵ_0} = ℵ_2.

5.2 The Parallel Case for CH

The parallel case for CH also has two steps, the first involving Ω-completeness and the second involving maximality.

The first result in the first step is the following:

Theorem 5.4 (Woodin, 1985). Assume ZFC and that there is a proper class of measurable Woodin cardinals. Then ZFC + CH is Ω-complete for Σ^2_1.

Moreover, up to Ω-equivalence, CH is the unique Σ^2_1-statement that is Ω-complete for Σ^2_1; that is, letting T_A be the Ω-complete theory given by ZFC + A where A is Σ^2_1, all such T_A are Ω-equivalent to T_CH and hence (trivially) all such T_A contain CH. In other words, there is a “good” theory and all “good” theories imply CH.

To complete the first step we have to determine whether this result is robust. For it could be the case that when one considers the next level, Σ^2_2 (or further levels, like third-order arithmetic), CH is no longer part of the picture; that is, perhaps large cardinals imply that there is an axiom A such that ZFC + A is Ω-complete for Σ^2_2 (or, going further, all of third-order arithmetic) and yet not all such A have an associated T_A which contains CH. We must rule this out if we are to secure the first step.

The most optimistic scenario along these lines is this: there is a large cardinal axiom L and axioms A⃗ such that ZFC + L + A⃗ is Ω-complete for all of third-order arithmetic and all such theories are Ω-equivalent and imply CH. Going further, perhaps for each specifiable fragment V_λ of the universe of sets there is a large cardinal axiom L and axioms A⃗ such that ZFC + L + A⃗ is Ω-complete for the entire theory of V_λ and, moreover, such theories are Ω-equivalent and imply CH. Were this to be the case it would mean that for each such λ there is a unique Ω-complete picture of V_λ and we would have a unique Ω-complete understanding of arbitrarily large fragments of the universe of sets. This would make for a strong case for new axioms completing the axioms of ZFC and large cardinal axioms.

Unfortunately, this optimistic scenario fails: assuming the existence of one such theory one can construct another which differs on CH:

Theorem 5.5 (Koellner and Woodin). Assume that there is a proper class of Woodin cardinals. Suppose that V_λ is a specifiable fragment of the universe, that L is a large cardinal axiom, and that A⃗ are axioms such that

ZFC + L + A⃗ is Ω-complete for Th(V_λ).

Then there are axioms B⃗ such that

ZFC + L + B⃗ is Ω-complete for Th(V_λ)

and the two theories differ on CH.

This still leaves us with the question of existence and the answer to this question is sensitive to the Ω Conjecture and the AD^+ Conjecture:

Theorem (Woodin). Assume that there is a proper class of Woodin cardinals and that the Ω Conjecture holds. Then there is no recursive theory A⃗ such that ZFC + A⃗ is Ω-complete for the theory of V_{δ_0+1}, where δ_0 is the least Woodin cardinal.

In fact, under a stronger assumption, the scenario must fail at a much earlier level:

Theorem (Woodin). Assume that there is a proper class of Woodin cardinals and that the Ω Conjecture and the AD^+ Conjecture hold. Then there is no recursive theory A⃗ such that ZFC + A⃗ is Ω-complete for the theory of Σ^2_3.

It is open whether there can be such a theory at the level of Σ^2_2. It is conjectured that ZFC + ◇ is Ω-complete (assuming large cardinal axioms) for Σ^2_2.

Let us assume that it is answered positively and return to the question of uniqueness. For each such axiom A, let T_A be the Σ^2_2 theory computed by ZFC + A in Ω-logic. The question of uniqueness simply asks whether T_A is unique.

Theorem (Koellner and Woodin). Assume that there is a proper class of Woodin cardinals. Suppose that A is an axiom such that

  i. ZFC + A is Ω-satisfiable and
  ii. ZFC + A is Ω-complete for Σ^2_2.

Then there is an axiom B such that

  i′. ZFC + B is Ω-satisfiable and
  ii′. ZFC + B is Ω-complete for Σ^2_2

and the associated theories T_A and T_B are distinct.

This is the parallel of Theorem 5.2.

To complete the parallel one would need that CH is among all of the T_A. This is not known. But it is a reasonable conjecture:

Conjecture. Assume that there is a proper class of Woodin cardinals. Suppose that A is an axiom such that

  • ZFC + A is Ω-satisfiable and
  • ZFC + A is Ω-complete for Σ^2_2.

Then

ZFC + A ⊧_Ω CH.

Should this conjecture hold it would provide a true analogue of Theorem 5.1. This would complete the parallel with the first step.

There is also a parallel with the second step. Recall that for the second step in the previous subsection we had that although the various T_A did not agree, they all contained ¬CH and, moreover, from among them there is one that stands out, namely the theory given by (∗), since this theory maximizes the Π_2-theory of the structure ⟨H(ω_2), ∈, I_NS, A | A ∈ 𝒫(ℝ) ∩ L(ℝ)⟩. In the present context of CH we again (assuming the conjecture) have that although the T_A do not agree, they all contain CH. It turns out that once again, from among them there is one that stands out, namely, the maximum one. For it is known (by a result of Woodin in 1985) that if there is a proper class of measurable Woodin cardinals then there is a forcing extension satisfying all Σ^2_2-sentences φ such that ZFC + CH + φ is Ω-satisfiable (see Ketchersid, Larson, & Zapletal (2010)). It follows that if the question of existence is answered positively with an A that is Σ^2_2 then T_A must be this maximum Σ^2_2 theory and, consequently, all T_A agree when A is Σ^2_2. So, assuming that there is a T_A where A is Σ^2_2, then, although not all T_A agree (when A is arbitrary), there is one that stands out, namely, the one that is maximum for Σ^2_2-sentences.

Thus, assuming that the conjecture holds, the case of CH parallels that of ¬CH, only now Σ²₂ takes the place of the theory of H(ω₂). Under the background assumptions we have, in the case of ¬CH:

  • there are A such that ZFC + A is Ω-complete for H(ω₂),
  • for every such A the associated T_A contains ¬CH, and
  • there is a T_A which is maximal, namely T_(∗), and this theory contains 2^ℵ₀ = ℵ₂;

and, in the case of CH:

  • there are Σ²₂-axioms A such that ZFC + A is Ω-complete for Σ²₂,
  • for every such A the associated T_A contains CH, and
  • there is a T_A which is maximal.

The two situations are parallel with regard to maximality, but in terms of the level of Ω-completeness the first is stronger: in the first case we are getting Ω-completeness not just with regard to the Π₂-theory of H(ω₂) (with the additional predicates), but with regard to all of H(ω₂). This asymmetry is arguably a point in favour of the case for ¬CH, even granting the conjecture.

But there is a stronger point. There is evidence coming from inner model theory (which we shall discuss in the next section) to the effect that the conjecture is in fact false. Should this turn out to be the case, it would break the parallel, strengthening the case for ¬CH.

However, one might counter this as follows: the higher degree of Ω-completeness in the case for ¬CH is really illusory, since it is an artifact of the fact that under (∗) the theory of H(ω₂) is in fact mutually interpretable with that of H(ω₁) (by a deep result of Woodin). Moreover, this latter fact is in conflict with the spirit of the Transcendence Principles discussed in Section 4.3. Those principles were invoked in an argument to the effect that CH does not have an answer. Thus, when all the dust settles, the real import of Woodin's work on CH (so the argument goes) is not that CH is false but rather that CH very likely has an answer.

It seems fair to say that at this stage the status of the local approaches to resolving CH is somewhat unsettled. For this reason, in the remainder of this entry we shall focus on global approaches to settling CH. We shall very briefly discuss two such approaches—the approach via inner model theory and the approach via quasi-large cardinal axioms.

6. The Ultimate Inner Model

Inner model theory aims to produce “L-like” models that satisfy large cardinal axioms. For each large cardinal axiom Φ that has been reached by inner model theory, one has an axiom of the form V = L^Φ. This axiom has the virtue that (just as in the simplest case of V = L) it provides an “effectively complete” solution to questions about L^Φ (which, by assumption, is V). Unfortunately, it turns out that the axiom V = L^Φ is incompatible with stronger large cardinal axioms Φ′. For this reason, axioms of this form have never been considered as plausible candidates for new axioms.

But recent developments in inner model theory (due to Woodin) show that everything changes at the level of a supercompact cardinal. These developments show that if there is an inner model N which “inherits” a supercompact cardinal from V (in the manner in which one would expect, given the trajectory of inner model theory), then there are two remarkable consequences: First, N is close to V (in, for example, the sense that for sufficiently large singular cardinals λ, N correctly computes λ⁺). Second, N inherits all known large cardinals that exist in V. Thus, in contrast to the inner models that have been developed thus far, an inner model at the level of a supercompact would provide one with an axiom that could not be refuted by stronger large cardinal assumptions.

The issue, of course, is whether one can have an “L-like” model (one that yields an “effectively complete” axiom) at this level. There is reason to believe that one can. There is now a candidate model L^Ω that yields an axiom V = L^Ω with the following features: First, V = L^Ω is “effectively complete.” Second, V = L^Ω is compatible with all large cardinal axioms. Thus, on this scenario, the ultimate theory would be the (open-ended) theory ZFC + V = L^Ω + LCA, where LCA is a schema standing for “large cardinal axioms.” The large cardinal axioms will catch instances of Gödelian independence and the axiom V = L^Ω will capture the remaining instances of independence. This theory would imply CH and settle the remaining undecided statements. Independence would cease to be an issue.

It turns out, however, that there are other candidate axioms that share these features, and so the spectre of pluralism reappears. For example, there are axioms V = L^Ω_S and V = L^Ω_(∗). These axioms would also be “effectively complete” and compatible with all large cardinal axioms. Yet they would resolve various questions differently than the axiom V = L^Ω. For example, the axiom V = L^Ω_(∗) would imply ¬CH. How, then, is one to adjudicate between them?

Further Reading: For an introduction to inner model theory see Mitchell (2010) and Steel (2010). For more on the recent developments at the level of one supercompact and beyond see Woodin (2010).

7. The Structure Theory of L(V_{λ+1})

This brings us to the second global approach, one that promises to select the correct axiom from among V = L^Ω, V = L^Ω_S, V = L^Ω_(∗), and their variants. This approach is based on the remarkable analogy between the structure theory of L(ℝ) under the assumption of AD^{L(ℝ)} and the structure theory of L(V_{λ+1}) under the assumption that there is an elementary embedding from L(V_{λ+1}) into itself with critical point below λ. This embedding assumption is the strongest large cardinal axiom that appears in the literature.

The analogy between L(ℝ) and L(V_{λ+1}) is based on the observation that L(ℝ) is simply L(V_{ω+1}). Thus, λ is the analogue of ω, λ⁺ is the analogue of ω₁, and so on. As an example of the parallel between the structure theory of L(ℝ) under AD^{L(ℝ)} and the structure theory of L(V_{λ+1}) under the embedding axiom, let us mention that in the first case, ω₁ is a measurable cardinal in L(ℝ) and, in the second case, the analogue of ω₁—namely, λ⁺—is a measurable cardinal in L(V_{λ+1}). This result is due to Woodin and is just one instance from among many examples of the parallel that are contained in his work.
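
Schematically, the dictionary underlying the analogy reads as follows (a summary of the correspondences just listed, not a table appearing in the original):

$$ \begin{array}{ccc} L(\mathbb{R}) = L(V_{\omega+1}) & \longleftrightarrow & L(V_{\lambda+1}) \\ \omega & \longleftrightarrow & \lambda \\ \omega_1 & \longleftrightarrow & \lambda^{+} \\ \mathrm{AD}^{L(\mathbb{R})} & \longleftrightarrow & \text{the embedding axiom} \end{array} $$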

Now, we have a great deal of information about the structure theory of L(ℝ) under AD^{L(ℝ)}. Indeed, as we noted above, this axiom is “effectively complete” with regard to questions about L(ℝ). In contrast, the embedding axiom on its own is not sufficient to imply that L(V_{λ+1}) has a structure theory that fully parallels that of L(ℝ) under AD^{L(ℝ)}. However, the existence of an already rich parallel is evidence that the parallel extends, and we can supplement the embedding axiom by adding some key components. When one does so, something remarkable happens: the supplementary axioms become forcing fragile. This means that they have the potential to erase independence and provide non-trivial information about V_{λ+1}. For example, these supplementary axioms might settle CH and much more.

The difficulty in investigating the possibilities for the structure theory of L(V_{λ+1}) is that we have not had the proper lenses through which to view it. The trouble is that the model L(V_{λ+1}) contains a large piece of the universe—namely, V_{λ+1}—and the theory of this structure is radically underdetermined. The results discussed above provide us with the proper lenses. For one can examine the structure theory of L(V_{λ+1}) in the context of ultimate inner models like L^Ω, L^Ω_S, L^Ω_(∗), and their variants. The point is that these models can accommodate the embedding axiom and, within each, one will be able to compute the structure theory of L(V_{λ+1}).

This provides a means to select the correct axiom from among V = L^Ω, V = L^Ω_S, V = L^Ω_(∗), and their variants. One simply looks at the L(V_{λ+1}) of each model (where the embedding axiom holds) and checks to see which has the true analogue of the structure theory of L(ℝ) under the assumption of AD^{L(ℝ)}. It is already known that certain pieces of the structure theory cannot hold in L^Ω. But it is open whether they can hold in L^Ω_S.

Let us consider one such (very optimistic) scenario: The true analogue of the structure theory of L(ℝ) under AD^{L(ℝ)} holds of the L(V_{λ+1}) of L^Ω_S but not of any of its variants. Moreover, this structure theory is “effectively complete” for the theory of V_{λ+1}. Assuming that there is a proper class of λ where the embedding axiom holds, this gives an “effectively complete” theory of V. And, remarkably, part of that theory is that V must be L^Ω_S. This (admittedly very optimistic) scenario would constitute a very strong case for axioms that resolve all of the undecided statements.

One should not place too much weight on this particular scenario. It is just one of many. The point is that we are now in a position to write down a list of definite questions with the following features: First, the questions on this list will have answers—independence is not an issue. Second, if the answers converge then one will have strong evidence for new axioms settling the undecided statements (and hence non-pluralism about the universe of sets); while if the answers oscillate, one will have evidence that these statements are “absolutely undecidable” and this will strengthen the case for pluralism. In this way the questions of “absolute undecidability” and pluralism are given mathematical traction.

Further Reading: For more on the structure theory of L(V_{λ+1}) and the parallel with determinacy see Woodin (2011b).

  • Abraham, U. and M. Magidor, 2010, “Cardinal arithmetic,” in Foreman and Kanamori 2010.
  • Bagaria, J., N. Castells, and P. Larson, 2006, “An Ω-logic primer,” in J. Bagaria and S. Todorcevic (eds), Set theory , Trends in Mathematics, Birkhäuser, Basel, pp. 1–28.
  • Cohen, P., 1963, “The independence of the continuum hypothesis I,” Proceedings of the U.S. National Academy of Sciences, 50: 1143–48.
  • Foreman, M. and A. Kanamori, 2010, Handbook of Set Theory , Springer-Verlag.
  • Foreman, M. and M. Magidor, 1995, “Large cardinals and definable counterexamples to the continuum hypothesis,” Annals of Pure and Applied Logic 76: 47–97.
  • Foreman, M., M. Magidor, and S. Shelah, 1988, “Martin's Maximum, saturated ideals, and non-regular ultrafilters. Part I,” Annals of Mathematics 127: 1–47.
  • Gödel, K., 1938a, “The consistency of the axiom of choice and of the generalized continuum-hypothesis,” Proceedings of the U.S. National Academy of Sciences, 24: 556–7.
  • Gödel, K., 1938b, “Consistency-proof for the generalized continuum-hypothesis,” Proceedings of the U.S. National Academy of Sciences, 25: 220–4.
  • Hallett, M., 1984, Cantorian Set Theory and Limitation of Size , Vol. 10 of Oxford Logic Guides , Oxford University Press.
  • Holz, M., K. Steffens, and E. Weitz, 1999, Introduction to Cardinal Arithmetic , Birkhäuser Advanced Texts, Birkhäuser Verlag, Basel.
  • Jech, T. J., 2003, Set Theory: Third Millennium Edition, Revised and Expanded , Springer-Verlag, Berlin.
  • Ketchersid, R., P. Larson, and J. Zapletal, 2010, “Regular embeddings of the stationary tower and Woodin's Σ²₂ maximality theorem,” Journal of Symbolic Logic 75(2): 711–727.
  • Koellner, P., 2010, “Strong logics of first and second order,” Bulletin of Symbolic Logic 16(1): 1–36.
  • Koellner, P. and W. H. Woodin, 2009, “Incompatible Ω-complete theories,” The Journal of Symbolic Logic 74 (4).
  • Martin, D. A., 1976, “Hilbert's first problem: The Continuum Hypothesis,” in F. Browder (ed.), Mathematical Developments Arising from Hilbert's Problems , Vol. 28 of Proceedings of Symposia in Pure Mathematics , American Mathematical Society, Providence, pp. 81–92.
  • Mitchell, W., 2010, “Beginning inner model theory,” in Foreman and Kanamori 2010.
  • Steel, J. R., 2010, “An outline of inner model theory,” in Foreman and Kanamori 2010.
  • Woodin, W. H., 1999, The Axiom of Determinacy, Forcing Axioms, and the Nonstationary Ideal , Vol. 1 of de Gruyter Series in Logic and its Applications , de Gruyter, Berlin.
  • –––, 2001a, “The continuum hypothesis, part I,” Notices of the American Mathematical Society 48(6): 567–576.
  • –––, 2001b, “The continuum hypothesis, part II,” Notices of the American Mathematical Society 48(7): 681–690.
  • –––, 2005a, “The continuum hypothesis,” in R. Cori, A. Razborov, S. Todorčević and C. Wood (eds), Logic Colloquium 2000, Vol. 19 of Lecture Notes in Logic, Association of Symbolic Logic, pp. 143–197.
  • –––, 2005b, “Set theory after Russell: the journey back to Eden,” in G. Link (ed.), One Hundred Years Of Russell's Paradox: Mathematics, Logic, Philosophy , Vol. 6 of de Gruyter Series in Logic and Its Applications , Walter De Gruyter Inc, pp. 29–47.
  • –––, 2010, “Suitable extender models I,” Journal of Mathematical Logic 10(1–2): 101–339.
  • –––, 2011a, “The Continuum Hypothesis, the generic-multiverse of sets, and the Ω-conjecture,” in J. Kennedy and R. Kossak, (eds), Set Theory, Arithmetic, and Foundations of Mathematics: Theorems, Philosophies , Vol. 36 of Lecture Notes in Logic , Cambridge University Press.
  • –––, 2011b, “Suitable extender models II,” Journal of Mathematical Logic 11(2): 115–436.

Related Entries

Gödel, Kurt | set theory | set theory: early development | set theory: large cardinals and determinacy

Copyright © 2013 by Peter Koellner <koellner@fas.harvard.edu>


Continuum Hypothesis

Gödel showed that no contradiction would arise if the continuum hypothesis were added to conventional Zermelo-Fraenkel set theory. However, using a technique called forcing, Paul Cohen (1963, 1964) proved that no contradiction would arise if the negation of the continuum hypothesis were added to set theory. Together, Gödel's and Cohen's results established that the continuum hypothesis can neither be proved nor disproved from the Zermelo-Fraenkel axioms together with the axiom of choice (assuming they are consistent); it is independent of them.
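
In modern notation, the two halves of the independence result can be summarized as follows (Gödel's half was proved via the constructible universe, Cohen's via forcing):

$$ \mathrm{Con}(\mathrm{ZF}) \Rightarrow \mathrm{Con}(\mathrm{ZFC} + \mathrm{CH}) \qquad \text{and} \qquad \mathrm{Con}(\mathrm{ZF}) \Rightarrow \mathrm{Con}(\mathrm{ZFC} + \neg\mathrm{CH}). $$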

Woodin (2001a, 2001b, 2002) formulated a new plausible "axiom" whose adoption (in addition to the Zermelo-Fraenkel axioms and the axiom of choice) would imply that the continuum hypothesis is false. Since set theorists have felt for some time that the continuum hypothesis should be false, if Woodin's axiom proves to be particularly elegant, useful, or intuitive, it may catch on. It is interesting to compare this with the situation surrounding Euclid's parallel postulate more than 300 years ago, when Wallis proposed an additional axiom that would imply the parallel postulate (Greenberg 1994, pp. 152–153).

Portions of this entry contributed by Matthew Szudzik

Cite this as:

Szudzik, Matthew and Weisstein, Eric W. "Continuum Hypothesis." From MathWorld--A Wolfram Web Resource. https://mathworld.wolfram.com/ContinuumHypothesis.html

September 16, 2017

Mathematicians Measure Infinities, and Find They're Equal

Proof rests on a surprising link between infinity size and the complexity of mathematical theories

By Kevin Hartnett & Quanta Magazine

From Quanta Magazine (find original story here).

In a breakthrough that disproves decades of conventional wisdom, two mathematicians have shown that two different variants of infinity are actually the same size. The advance touches on one of the most famous and intractable problems in mathematics: whether there exist infinities between the infinite size of the natural numbers and the larger infinite size of the real numbers.

The problem was first identified over a century ago. At the time, mathematicians knew that “the real numbers are bigger than the natural numbers, but not how much bigger. Is it the next biggest size, or is there a size in between?” said Maryanthe Malliaris of the University of Chicago, co-author of the new work along with Saharon Shelah of the Hebrew University of Jerusalem and Rutgers University.

In their new work, Malliaris and Shelah resolve a related 70-year-old question about whether one infinity (call it p) is smaller than another infinity (call it t). They proved the two are in fact equal, much to the surprise of mathematicians.

“It was certainly my opinion, and the general opinion, that p should be less than t,” Shelah said.

Malliaris and Shelah published their proof last year in the Journal of the American Mathematical Society and were honored this past July with one of the top prizes in the field of set theory. But their work has ramifications far beyond the specific question of how those two infinities are related. It opens an unexpected link between the sizes of infinite sets and a parallel effort to map the complexity of mathematical theories.

Many Infinities

The notion of infinity is mind-bending. But the idea that there can be different sizes of infinity? That’s perhaps the most counterintuitive mathematical discovery ever made. It emerges, however, from a matching game even kids could understand.

Suppose you have two groups of objects, or two “sets,” as mathematicians would call them: a set of cars and a set of drivers. If there is exactly one driver for each car, with no empty cars and no drivers left behind, then you know that the number of cars equals the number of drivers (even if you don’t know what that number is).

In the late 19th century, the German mathematician Georg Cantor captured the spirit of this matching strategy in the formal language of mathematics. He proved that two sets have the same size, or “cardinality,” when they can be put into one-to-one correspondence with each other—when there is exactly one driver for every car. Perhaps more surprisingly, he showed that this approach works for infinitely large sets as well.

Consider the natural numbers: 1, 2, 3 and so on. The set of natural numbers is infinite. But what about the set of just the even numbers, or just the prime numbers? Each of these sets would at first seem to be a smaller subset of the natural numbers. And indeed, over any finite stretch of the number line, there are about half as many even numbers as natural numbers, and still fewer primes.

Yet infinite sets behave differently. Cantor showed that there’s a one-to-one correspondence between the elements of each of these infinite sets.

Because of this, Cantor concluded that all three sets are the same size. Mathematicians call sets of this size “countable,” because you can assign one counting number to each element in each set.
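
To make such a pairing concrete, here is a small Python sketch (my illustration, not the article's) of the correspondence between the natural numbers and the even numbers: the map n ↦ 2n matches each natural number with exactly one even number, and the map can be inverted, so nothing in either set is left unmatched.

    # Pair the n-th natural number with the n-th even number.
    def to_even(n: int) -> int:
        return 2 * n

    # Invert the pairing: recover n from the even number 2n.
    def from_even(m: int) -> int:
        return m // 2

    # The two maps are mutually inverse, so the pairing is one-to-one:
    # every natural number gets exactly one even number, and vice versa.
    for n in range(1000):
        assert from_even(to_even(n)) == n

    print([to_even(n) for n in range(10)])  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]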

After he established that the sizes of infinite sets can be compared by putting them into one-to-one correspondence with each other, Cantor made an even bigger leap: He proved that some infinite sets are even larger than the set of natural numbers.

Consider the real numbers, which are all the points on the number line. The real numbers are sometimes referred to as the “continuum,” reflecting their continuous nature: There’s no space between one real number and the next. Cantor was able to show that the real numbers can’t be put into a one-to-one correspondence with the natural numbers: Even after you create an infinite list pairing natural numbers with real numbers, it’s always possible to come up with another real number that’s not on your list. Because of this, he concluded that the set of real numbers is larger than the set of natural numbers. Thus, a second kind of infinity was born: the uncountably infinite.
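
The diagonal construction can be illustrated in miniature (again a Python sketch of my own; the toy enumeration stands in for the hypothetical infinite list): flipping the k-th digit of the k-th listed binary sequence produces a sequence that differs from every sequence on the list.

    # A toy "enumeration" of infinite binary sequences: sequence k has a 1
    # exactly at position k. (Any rule assigning a sequence to each k would do.)
    def listing(k: int):
        return lambda i: 1 if i == k else 0

    # Cantor's diagonal trick: build a sequence whose k-th digit disagrees
    # with the k-th digit of the k-th listed sequence.
    def diagonal(prefix_len: int) -> list:
        return [1 - listing(k)(k) for k in range(prefix_len)]

    diag = diagonal(8)
    for k in range(8):
        # diag differs from sequence k at position k, so it is not on the list.
        assert diag[k] != listing(k)(k)
    print(diag)  # [0, 0, 0, 0, 0, 0, 0, 0]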

What Cantor couldn’t figure out was whether there exists an intermediate size of infinity—something between the size of the countable natural numbers and the uncountable real numbers. He guessed not, a conjecture now known as the continuum hypothesis.

In 1900, the German mathematician David Hilbert made a list of 23 of the most important problems in mathematics. He put the continuum hypothesis at the top. “It seemed like an obviously urgent question to answer,” Malliaris said.

In the century since, the question has proved itself to be almost uniquely resistant to mathematicians’ best efforts. Do in-between infinities exist? We may never know.

Throughout the first half of the 20th century, mathematicians tried to resolve the continuum hypothesis by studying various infinite sets that appeared in many areas of mathematics. They hoped that by comparing these infinities, they might start to understand the possibly non-empty space between the size of the natural numbers and the size of the real numbers.

Many of the comparisons proved to be hard to draw. In the 1960s, the mathematician Paul Cohen explained why. Cohen developed a method called “forcing” that demonstrated that the continuum hypothesis is independent of the axioms of mathematics—that is, it couldn’t be proved within the framework of set theory. (Cohen’s work complemented work by Kurt Gödel in 1940 that showed that the continuum hypothesis couldn’t be disproved within the usual axioms of mathematics.)

Cohen’s work won him the Fields Medal (one of math’s highest honors) in 1966. Mathematicians subsequently used forcing to resolve many of the comparisons between infinities that had been posed over the previous half-century, showing that these too could not be answered within the framework of set theory. (Specifically, Zermelo-Fraenkel set theory plus the axiom of choice.)

Some problems remained, though, including a question from the 1940s about whether p is equal to t. Both p and t are orders of infinity that quantify the minimum size of collections of subsets of the natural numbers in precise (and seemingly unique) ways.

The details of the two sizes don't much matter. What's more important is that mathematicians quickly figured out two things about the sizes of p and t. First, both are larger than the size of the natural numbers. Second, p is always less than or equal to t. Therefore, if p were strictly less than t, then p would be an intermediate infinity—something between the size of the natural numbers and the size of the real numbers. The continuum hypothesis would be false.
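
For the curious, the standard definitions (not spelled out in the article) run as follows. Write A ⊆* B when A∖B is finite, and call an infinite set a pseudo-intersection of a family if it is almost contained (⊆*) in every member of the family. Then:

$$ \mathfrak{p} = \min\{\, |\mathcal{F}| : \mathcal{F}\ \text{is a family of infinite subsets of}\ \mathbb{N}\ \text{every finite subfamily of which has infinite intersection, yet which has no pseudo-intersection} \,\} $$

$$ \mathfrak{t} = \min\{\, |\mathcal{T}| : \mathcal{T}\ \text{is a tower: a family of infinite subsets of}\ \mathbb{N}\ \text{well-ordered by}\ \supseteq^{*}\ \text{with no pseudo-intersection} \,\} $$

Since every tower has the finite-intersection property just described, the inequality p ≤ t mentioned above follows immediately.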

Mathematicians tended to assume that the relationship between p and t couldn't be proved within the framework of set theory, but they couldn't establish the independence of the problem either. The relationship between p and t remained in this undetermined state for decades. When Malliaris and Shelah found a way to solve it, it was only because they were looking for something else.

An Order of Complexity

Around the same time that Paul Cohen was forcing the continuum hypothesis beyond the reach of mathematics, a very different line of work was getting under way in the field of model theory.

For a model theorist, a “theory” is the set of axioms, or rules, that define an area of mathematics. You can think of model theory as a way to classify mathematical theories—an exploration of the source code of mathematics. “I think the reason people are interested in classifying theories is they want to understand what is really causing certain things to happen in very different areas of mathematics,” said H. Jerome Keisler, emeritus professor of mathematics at the University of Wisconsin, Madison.

In 1967, Keisler introduced what’s now called Keisler’s order, which seeks to classify mathematical theories on the basis of their complexity. He proposed a technique for measuring complexity and managed to prove that mathematical theories can be sorted into at least two classes: those that are minimally complex and those that are maximally complex. “It was a small starting point, but my feeling at that point was there would be infinitely many classes,” Keisler said.

It isn’t always obvious what it means for a theory to be complex. Much work in the field is motivated in part by a desire to understand that question. Keisler describes complexity as the range of things that can happen in a theory—and theories where more things can happen are more complex than theories where fewer things can happen.

A little more than a decade after Keisler introduced his order, Shelah published an influential book, which included an important chapter showing that there are naturally occurring jumps in complexity—dividing lines that distinguish more complex theories from less complex ones. After that, little progress was made on Keisler’s order for 30 years.

Then, in her 2009 doctoral thesis and other early papers, Malliaris reopened the work on Keisler’s order and provided new evidence for its power as a classification program. In 2011, she and Shelah started working together to better understand the structure of the order. One of their goals was to identify more of the properties that make a theory maximally complex according to Keisler’s criterion.

Malliaris and Shelah eyed two properties in particular. They already knew that the first one causes maximal complexity. They wanted to know whether the second one did as well. As their work progressed, they realized that this question was parallel to the question of whether p and t are equal. In 2016, Malliaris and Shelah published a 60-page paper that solved both problems: They proved that the two properties are equally complex (they both cause maximal complexity), and they proved that p equals t.

“Somehow everything lined up,” Malliaris said. “It’s a constellation of things that got solved.”

This past July, Malliaris and Shelah were awarded the Hausdorff medal, one of the top prizes in set theory. The honor reflects the surprising, and surprisingly powerful, nature of their proof. Most mathematicians had expected that p was less than t, and that a proof of that inequality would be impossible within the framework of set theory. Malliaris and Shelah proved that the two infinities are equal. Their work also revealed that the relationship between p and t has much more depth to it than mathematicians had realized.

“I think people thought that if by chance the two cardinals were provably equal, the proof would maybe be surprising, but it would be some short, clever argument that doesn't involve building any real machinery,” said Justin Moore, a mathematician at Cornell University who has published a brief overview of Malliaris and Shelah's proof.

Instead, Malliaris and Shelah proved that p and t are equal by cutting a path between model theory and set theory that is already opening new frontiers of research in both fields. Their work also finally puts to rest a problem that mathematicians had hoped would help settle the continuum hypothesis. Still, the overwhelming feeling among experts is that this apparently unresolvable proposition is false: While infinity is strange in many ways, it would be almost too strange if there weren't many more sizes of it than the ones we've already found.

Reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

Continuum Hypothesis

There is no set whose cardinality is strictly between that of the integers and the real numbers.

Symbolically, the continuum hypothesis asserts that there is no cardinal $\mathfrak{m}$ with $\aleph_0 < \mathfrak{m} < 2^{\aleph_0}$; in the presence of the axiom of choice this is equivalent to $2^{\aleph_0} = \aleph_1$.

Generalized Continuum Hypothesis

The Generalized Continuum Hypothesis is the proposition:

Let $x$ and $y$ be infinite sets. Then it is never the case that $\left|{x}\right| < \left|{y}\right| < \left|{\powerset x}\right|$.

In other words, there are no infinite cardinals strictly between the cardinality of $x$ and that of its power set $\powerset x$.

Hilbert $23$

This problem is no. $1$ in the Hilbert $23$.

Historical Note

The Continuum Hypothesis was originally conjectured by Georg Cantor.

In $1940$, Kurt Gödel showed that it is impossible to disprove the Continuum Hypothesis (CH for short) in Zermelo-Fraenkel set theory (ZF), with or without the Axiom of Choice (ZFC).

In $1963$, Paul Cohen showed that it is impossible to prove CH in ZF or ZFC.

These results together show that CH is independent of both ZF and ZFC.

Note, however, that these results do not settle CH one way or the other, nor do they establish that CH is undecidable.

They merely indicate that CH cannot be proved within the scope of ZF or ZFC, and that any further progress will depend on further insights on the nature of sets and their cardinality.

It has been suggested that a key factor contributing to the difficulty of resolving this question is that Gödel's Incompleteness Theorems show that no formal axiomatization of set theory can capture the entire spread of properties that might uniquely specify any possible set.


Continuum hypothesis

(From the Encyclopedia of Mathematics: www.springer.com, The European Mathematical Society.)

The hypothesis, due to G. Cantor (1878), stating that every infinite subset of the continuum $\mathbf{R}$ is either equivalent to the set of natural numbers or to $\mathbf{R}$ itself. An equivalent formulation (in the presence of the axiom of choice) is: $$ 2^{\aleph_0} = \aleph_1 $$ (see Aleph). The generalization of this equality to arbitrary cardinal numbers is called the generalized continuum hypothesis (GCH): For every ordinal number $\alpha$, \begin{equation} \label{eq:1} 2^{\aleph_\alpha} = \aleph_{\alpha+1} \ . \end{equation}

In the absence of the axiom of choice, the generalized continuum hypothesis is stated in the form \begin{equation} \label{eq:2} \forall \mathfrak{k} \,\,\neg \exists \mathfrak{m}\ (\,\mathfrak{k} < \mathfrak{m} < 2^{\mathfrak{k}}\,) \end{equation} where $\mathfrak{k}$, $\mathfrak{m}$ stand for infinite cardinal numbers. The axiom of choice and \eqref{eq:1} follow from \eqref{eq:2}, while \eqref{eq:1} and the axiom of choice together imply \eqref{eq:2}.

In his celebrated list of problems, D. Hilbert posed as Problem 1 the task of proving Cantor's continuum hypothesis (the problem of the continuum). This problem did not yield to traditional set-theoretical methods. Among mathematicians the conviction grew that the problem of the continuum was in principle unsolvable. It was only after mathematical concepts had been reduced to set-theoretical ones, axioms had been stated in set-theoretical language which could be placed at the foundations of mathematical proofs actually encountered in practice, and logical derivation methods had been formalized, that it became possible to state precisely, and then to solve, the question of the formal unsolvability of the continuum hypothesis. Formal unsolvability is understood in the sense that there does not exist a formal derivation in the Zermelo–Fraenkel system ZF either for the continuum hypothesis or for its negation.

In 1939 K. Gödel established the unprovability of the negation of the generalized continuum hypothesis (and hence the unprovability of the negation of the continuum hypothesis) in the system ZF with the axiom of choice (the system ZFC), under the hypothesis that ZF is consistent (see Gödel constructive set). In 1963 P. Cohen showed that the continuum hypothesis (and therefore also the generalized continuum hypothesis) cannot be deduced from the axioms of ZFC, assuming the consistency of ZF (see Forcing method).

Are these results concerning the problem of the continuum final? The answer depends on one's attitude toward the premise that ZF is consistent and, more significantly, toward the experimental fact that every meaningful mathematical proof (of traditional classical mathematics) can, after it has been found, be adequately stated in the system ZFC. This fact cannot be proved, nor can it even be precisely stated, since each revision raises a similar question concerning the adequacy of the revision for the revised theorem.

In model-theoretic language, Gödel and Cohen constructed models for ZFC in which $$ 2^{\mathfrak{k}} = \begin{cases} \mathfrak{m} & \text{if}\ \mathfrak{k} < \mathfrak{m}\,, \\ \mathfrak{k}^{+} & \text{if}\ \mathfrak{k} \ge \mathfrak{m}\,, \end{cases} $$ where $\mathfrak{m}$ is an arbitrary uncountable regular cardinal number given in advance, and $\mathfrak{k}^{+}$ is the first cardinal number greater than $\mathfrak{k}$. What is the possible behaviour of the function $2^{\mathfrak{k}}$ in various models of ZFC?

It is known that for regular cardinal numbers $\mathfrak{k}$ this function can take arbitrary values, subject only to the conditions $$ \mathfrak{k} < \mathfrak{k}' \Rightarrow 2^{\mathfrak{k}} \le 2^{\mathfrak{k}'} \,,\ \ \ \mathfrak{k} < \operatorname{cf}(2^{\mathfrak{k}}) $$ where $\operatorname{cf}(\mathfrak{a})$ is the smallest cardinal number cofinal with $\mathfrak{a}$ (see Cardinal number). (For example, it is consistent with ZFC that $2^{\aleph_0} = \aleph_2$ and $2^{\aleph_1} = \aleph_5$.) For singular (that is, non-regular) $\mathfrak{k}$, the value of the function $2^{\mathfrak{k}}$ may depend on its behaviour at smaller cardinal numbers. E.g., if \eqref{eq:1} holds for all $\alpha < \omega_1$, then it also holds for $\alpha = \omega_1$.
