Integers (symbol Z) are countable as well. The set of integers contains zero and all natural numbers as well as their negatives. To count them all, just start with 0, then 1 and -1, then 2 and -2, etc.
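The zig-zag order described above is easy to write down as a generator; here's a small Python sketch (the function name is mine):

```python
from itertools import islice

def count_integers():
    """Enumerate Z in the order 0, 1, -1, 2, -2, 3, -3, ..."""
    yield 0
    n = 1
    while True:
        yield n
        yield -n
        n += 1

print(list(islice(count_integers(), 7)))  # [0, 1, -1, 2, -2, 3, -3]
```

Every integer appears at some finite position in this stream, which is exactly what countability means.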

Rational numbers (symbol Q) are those numbers that are expressible as fractions, e.g. 1/2. Rational numbers are also countable. For an idea of how to count them, imagine a two-dimensional grid that has integers on each axis, and contains infinitely many horizontal and vertical lines, one for each integer on the respective axis. To count all rational numbers, start at (0,0). Then, count all intersections on the border of the square between (-1,-1) and (1,1). Then, count all intersections on the border of the larger square between (-2,-2) and (2,2). And so on. Each intersection (p,q) with q ≠ 0 names the fraction p/q; duplicates such as 1/2 = 2/4 are simply skipped.
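As a sketch, the shell-by-shell walk can be written as two generators (function names are mine; `Fraction` normalizes p/q so duplicates are easy to skip):

```python
from fractions import Fraction
from itertools import islice

def count_grid():
    """Walk the integer grid in expanding square shells around (0, 0)."""
    yield (0, 0)
    k = 1
    while True:
        for p in range(-k, k + 1):
            for q in range(-k, k + 1):
                if max(abs(p), abs(q)) == k:  # border of the k-th square
                    yield (p, q)
        k += 1

def count_rationals():
    """Read each intersection (p, q) with q != 0 as the fraction p/q,
    skipping values already seen (e.g. 1/2 = 2/4)."""
    seen = set()
    for p, q in count_grid():
        if q != 0:
            r = Fraction(p, q)
            if r not in seen:
                seen.add(r)
                yield r

print(list(islice(count_rationals(), 7)))  # 1, -1, 0, 2, -2, 1/2, -1/2
```

Every rational lies on some finite shell, so it appears at a finite position in the enumeration.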

Algebraic numbers are those numbers that are roots of polynomials with integer coefficients. Algebraic numbers include e.g. the square root of 2, which is a root of (x^2 - 2 = 0). Algebraic numbers aren't necessarily rational, but they are also countable. They are countable because their quantity is limited by the quantity of such polynomials, and the polynomials are countable. (A polynomial is determined by its coefficients, and a finite list of integer coefficients can be encoded as a single number.)
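One way to make "the polynomials are countable" concrete is to group integer-coefficient polynomials by a finite "height" (degree plus the sum of absolute coefficients), so the whole collection is a countable union of finite lists. A sketch under that convention (the height measure is one common choice, not the only one):

```python
from itertools import product

def polynomials_up_to(height):
    """All integer-coefficient polynomials, written as coefficient tuples
    (constant term first), whose degree plus the sum of the absolute values
    of the coefficients is at most `height`. Each such list is finite, and
    every polynomial shows up at some finite height."""
    polys = []
    for degree in range(height + 1):
        bound = height - degree
        for coeffs in product(range(-bound, bound + 1), repeat=degree + 1):
            if coeffs[-1] != 0 and sum(abs(c) for c in coeffs) <= bound:
                polys.append(coeffs)
    return polys

# x^2 - 2 (whose root is sqrt(2)) is the tuple (-2, 0, 1); it appears by height 5:
print((-2, 0, 1) in polynomials_up_to(5))  # True
```

Listing height 0, then height 1, then height 2, and so on enumerates every polynomial, and each polynomial has only finitely many roots.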

Finally, we arrive at real numbers. Real numbers (symbol R) are an expansion of rational (and algebraic) numbers that is intended to address a certain deficiency. The deficiency is that there exist convergent series of rational/algebraic numbers which appear to converge to *something*, but that something is not a rational (or algebraic) number. A classic example of such a limit is Pi.

The idea behind real numbers is to have a set which, in addition to rational numbers, also contains the converging points of all converging series that can be constructed within the set. This makes the set "complete".

The traditional idea is to envision a real number as an infinitely long series of zeroes and ones (i.e. the number itself expressed in binary, with a binary point some place).

If one imagines a real number like that, then it intuitively follows that a set of all-real-numbers is the set of all such infinite-series-of-0-and-1.

Cantor's diagonal argument can then be used to show that a set defined this way is uncountable. If you have an infinite set of distinct infinite-series-of-0-and-1, such that each series in the set is marked with an index, then you can construct another infinite-series-of-0-and-1 which differs from every series that has an index. Therefore, the indexing is not complete: the set contains more series than can be counted.
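The diagonal construction is mechanical enough to demonstrate on a finite table (a sketch only; the real argument runs over infinite sequences):

```python
def diagonal(rows):
    """Build a 0/1 sequence that differs from the n-th row at position n."""
    return [1 - rows[n][n] for n in range(len(rows))]

rows = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]
print(diagonal(rows))  # [1, 0, 1, 0] -- disagrees with row n in column n
```

No matter how the rows are indexed, the diagonal sequence cannot equal any of them, because it differs from row n at position n.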

Well yes, but: if we are constructing the set of real numbers as the set of rational numbers + the converging points of all converging series, then aren't the converging series countable? Every converging series must be the result of some algorithm. But algorithms are expressible as code, and code is expressible as numbers. So the number of possible algorithms is countable. So the number of all converging series must also be countable. If we construct the set of real numbers this way, then, it is a union of two countable sets - so it must itself be countable.

Does it not follow, then, that the standard envisioning of real numbers as infinite-series-of-0-and-1 must include an infinite number of them that are neither rational, nor the result of any limit? If so, what is the use of all these extra, unidentifiable "real numbers"?

It sure seems that, all those real numbers that we can actually express, or use in any way - are countable.

Or is the number of converging series uncountable? If so - how?

## 24 comments:

I think there's an error in your statement that "Every converging series must be the result of some algorithm".

You may want to read about the mathematics of Gregory Chaitin, which is very interesting even for a non-mathematician. It shows, among other interesting things, that there are numbers that can't be computed by any algorithm. Such numbers may even be useful, as is the case with the "number of wisdom".

From what I read about Chaitin's constant, it seems a ridiculous concept.

It represents a probability that a "randomly chosen" algorithm will halt. How do you choose "randomly" from an infinite set?

It depends on the program encoding used for algorithms. How can the probability that a randomly chosen algorithm will halt, depend on the encoding?

It can be "used in principle" to solve Goldbach's conjecture and Riemann's hypothesis, but the process of doing so involves simultaneously running "all programs of all lengths". WTF? How do you do that?

I await evidence about how numbers that can't be approximated by any algorithm may be useful. :)

More specifically, I want to know, why do numbers that cannot be computed by any algorithm, need to be included in the Real field?

A Real field that includes those things seems to me like a book that describes a forest full of trees and leaves and critters and undergrowth and bushes and worms, but then goes on at length to also talk about invisible unicorns, invisible elves, invisible gnomes and other invisible creatures that *supposedly* exist, but no one has ever seen or interacted with. It's just pointless.

The Wikipedia article (as well as my memory of a book describing the same constant) states that:

"The proof of this fact relies on an algorithm which, given the first n digits of Ω, solves Turing's halting problem for programs of length up to n".

Omega is also called "the number of wisdom". I don't know the details or the proofs of its properties, but given the above, the ability to solve the halting problem for programs of up to n bits is hardly useless. In fact, I am not aware of any other number more important or more interesting than Omega.

Yes.

And I have an invisible dragon living in my garage that doesn't breathe air, doesn't emit any thermal radiation, and whose existence can't be proved. But it shits invisible $100 bills.

Granted, those invisible $100 dollar bills are just as unprovable as the dragon. But they're $100 bills, see, and that's why I love my dragon!

Ω is *uncomputable*. It cannot be used to solve or prove *anything*.

Besides, I think we're missing the point here. I'm not arguing that no one should think about concepts like Ω.

I'm just asking why, exactly, uncomputable numbers have to be considered members of the Real field.

I suspect there might be a reason for that which has to do with the least upper bound requirement that distinguishes the Real field from the Rational field.

But still, deciding to include such numbers makes the Real field uncountable. This introduces paradoxes that require the axiom of choice to resolve.

If the Real field were countable, then there's no problem in saying "for every x", because you can at least pretend that you can enumerate every x - even if it takes forever.

If the Real field is *uncountable*, then saying "for every x" poses a problem, because how is it that you get to all those uncomputable x-es, which far exceed the computable ones in number? Then you need the axiom of choice, where you simply *assume* that you can get to those x-es, even though there's really no way to do so. Even if you *do* have forever.

This introduces the risk of building mathematical foundations on what boils down to thin air.

I have no problem with Ω as a concept, but I see that type of thing as a hypothetical, toy concept, for which there's no clear reason why it needs to be included in the Real field.

But if there is a reason, I am interested in what it is.

You seem to be confusing several issues.

The number of rational Cauchy sequences is uncountable. In fact, a well-known theorem states that the number of Cauchy sequences that are equivalent to a given Cauchy sequence is itself uncountable. Hence, the reals are uncountable.

However, most Cauchy sequences cannot be generated algorithmically. Numbers that can be generated algorithmically are called, appropriately enough, computable numbers. http://en.wikipedia.org/wiki/Computable_real_number. The set of computable numbers is countable. It is a subset of the reals and a superset of the rationals. It also includes all of the algebraic numbers and many of the transcendentals such as π and e.

So why do we need the reals when we have the computable numbers? It depends entirely on what type of mathematicians we are. If we are constructivists (see http://en.wikipedia.org/wiki/Constructivism_(mathematics)), or the axiom of choice bothers us, then we feel more secure with the idea of computable numbers. However, we must be willing to accept “holes” in our metric spaces and that we cannot take the least upper bound of an infinite ordered set. If we are classical mathematicians, then these restrictions are unacceptable.

However, if we are just software engineers, then we do not care about the reals or the computables. The rationals cover everything we need.

Hello Thomas,

thanks for that reply.

Yes, I suspected that the least upper bound criterion has something to do with it.

Do you know - or can you perhaps point to an answer? - whether or not the following is true:

In the field of all computable numbers, can it or can it not be shown that the least upper bound of every algorithmically generate-able convergent series (as opposed to every Cauchy series) in this field, is also itself a computable number?

I'm also having some trouble understanding the value of Cauchy series that are not algorithmically generatable. Can you show an example of an uncomputable Cauchy series that might illustrate how such series can be useful?

Basically, what puzzles me is why mathematicians spend so much time dealing in Unicorns. :)

Hi Denis,

There is only one complete ordered field, which is the reals. Hence, it is not true that, in the field of computable numbers, every subset with an upper bound has a least upper bound. When we talk about a least upper bound, this refers to a subset of an ordered field. When we talk about limits, this refers to infinite sequences. Are you asking about sets or sequences?

There are many useful mathematical concepts that are not susceptible to algorithmic definition. Chaitin's constant is one example. You can do a search on “definable numbers” and get other examples. Also check out measure theory.

It’s easy to get into a mode where you say “if I can’t even construct it, I don’t need it.” This limits you to countable sets, which is all well and good. But consider this: if you believe in the concept of infinite sets, such as the natural numbers, and you believe in the concept of the power set (set of all subsets of a given set), then you have a problem. This is because the power set of the natural numbers has uncountably many elements, and reasoning about all of its elements quickly drags in the axiom of choice. You can see how deeply the AC informs our everyday thinking.

Hey Thomas,

I think you misunderstood my question. I wasn't asking whether *every* Cauchy sequence of computable numbers has a least upper bound that is a computable number. I understand that this is false.

What I was asking was whether or not every *algorithmically constructible* convergent sequence of computable numbers has a least upper bound that is a computable number. My question disregards all Cauchy sequences that can't be generated algorithmically, and considers only those that can. If the computable field turned out to be complete in *this* respect, then perhaps it *could* be considered complete for constructivist purposes. If the only least upper bounds that don't exist in this field are those of sequences you can't compute, then there's no problem.

With regard to power sets of infinite sets: it's not that I don't "believe" in such constructs, it's that I don't see how they're useful. It is this understanding that I seek - or else, to find out they are actually not useful.

I find the concept of a power set of an infinite set really problematic. I don't see how we can just gloss over the fact that, oh, well, it's uncountable. Whatever it is, it contains infinities of elements that we can't even imagine. How can mathematics be built on such undisprovable foundations?

Why isn't it enough to just talk about power sets of increasingly large finite sets, instead of assuming the existence of some end-all-be-all power set of an infinite set, that can't actually exist - just like the number "infinity" doesn't?

Also: you mention Chaitin's constant as an example of a useful uncomputable number, but you don't show *how* it is useful. In most of my comments before your first one, I was replying to Boris about how Chaitin's constant is *not* useful.

In order for uncomputables to be useful, they have to teach us stuff about computables. Is there any way that you can use one of these unicorns to come to conclusions about zebras and horses, or is it all just unicorns from there on?

Hi Denis,

You raise a deep and interesting question. If computable numbers are an ordered field that includes all of the algebraic numbers and all of the transcendentals that we know and love, then what more do we need? Are these not the only numbers that we will actually ever use? Isn’t everything else superfluous mathematical mumbo jumbo?

I am not sure how to answer that question. To be honest I never thought about it before I read your blog while I was looking for a Windows SFTP server. However, with a little research I confirmed my original intuition about the set of computable numbers: it does not possess the least upper bound property. In other words, there exist computable, strictly increasing, bounded sequences of rational numbers whose least upper bound (a.k.a. supremum) is not a computable number. Such sequences are called Specker sequences and it is rather straightforward to construct one. Furthermore, at least some Specker sequences are also rational Cauchy sequences. Hence computable numbers are not a complete metric space either.

As for the usefulness of real numbers that are not computable, I personally think that it’s all useful. If nothing else, they tell us about the structure of reality. Before Cantor, infinity was infinity. We now know that there are profound distinctions between different types of infinity. So by your definition of useful, the reals are useful because they tell us what the computable numbers are not.

About power sets of the natural numbers, I was merely pointing out an inconsistency in arguments against the axiom of choice. But what is an “increasingly large finite set”? When you take the limit of "increasingly large finite set," you get an infinite set.

Hey Thomas!

Thanks for that reply. I quote:

Thomas: "In other words, there exist computable, strictly increasing, bounded sequences of rational numbers whose least upper bound (a.k.a. supremum) is not a computable number. Such sequences are called Specker sequences and it is rather straightforward to construct one."

I looked into Specker sequences, and I'm having some serious trouble seeing how the example construction, as shown here, is computable.

This construction requires a recursively enumerable set A in order to calculate q_n = Sum[i=0..n, 2^(-a_i - 1)].
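To make the formula concrete, here's a sketch of the partial sums, with a made-up prefix [3, 0, 5] standing in for however the enumeration of A happens to emit its elements (exact `Fraction` arithmetic avoids float rounding):

```python
from fractions import Fraction

def specker_partial(a_prefix):
    """Partial sums q_n = Sum[i=0..n, 2^(-a_i - 1)] for an enumerated
    prefix a_0, a_1, ... of a set A of distinct naturals. Each q_n is an
    exact rational; the sequence is strictly increasing and stays below 1."""
    total = Fraction(0)
    sums = []
    for a in a_prefix:
        total += Fraction(1, 2 ** (a + 1))
        sums.append(total)
    return sums

# Hypothetical enumeration order of some recursively enumerable set A:
qs = specker_partial([3, 0, 5])
print(qs)  # [Fraction(1, 16), Fraction(9, 16), Fraction(37, 64)]
```

Each q_n is trivially computable *given the prefix*; the whole question is whether an algorithm can produce the prefix in the first place.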

The problem is, for this to be computable, the set A has to be not just "recursively enumerable", but actually enumerable. However, after a considerable time spent searching, I am unable to find a "recursively enumerable" set that is, actually, enumerable.

It seems to me that "recursively enumerable" is a misnomer. Other sources seem to use "partially decidable", which seems more appropriate. The definition of such a set does not require the existence of any algorithm that would *enumerate* the first N members of such a set, in finite time, for any N. The definition merely requires a "maybe-decide" function which returns in finite time if an element is a *member* of the set.

So far, all the examples of partially decidable sets I was able to find were not actually enumerable. For each of those sets, a "maybe-decide" function was apparent, but there didn't seem to be any enumeration algorithm that would return N numbers from the set in finite time for any N.

The question, then, is whether there are any partially decidable sets which are *actually* enumerable? An actually enumerable set is required for the elements of a Specker sequence to be computable.

Thomas: "We now know that there are profound distinctions between different types of infinity."

I am not entirely convinced that the "different types of infinity" are not merely delusions - the results of a sequence of logical arguments whose foundations are in clouds.

In classical computing, we only have one conceptual infinity at our disposal. We have the concept of infinite time. At any given actual moment in time, we will be limited by a finite amount of computing power, and a finite amount of storage. Any mathematics that assumes more computing power than this has its foundations in the clouds. Its results, it seems to me, are also in the clouds.

I'm not sure if the same can be said about quantum computing. Perhaps quantum computing can introduce additional infinities. I don't know enough about that to say.

Thomas: "But what is an 'increasingly large finite set'? When you take the limit of 'increasingly large finite set,' you get an infinite set."

But that's nearly identical to this:

But what is an "increasingly large natural number"? When you take the limit of "increasingly large natural number", you get infinity.

And yet "infinity" is not considered a member of even the reals, let alone the computable numbers.

Hmm - I see now how dovetailing can be used to run "maybe-decide" functions concurrently, in this fashion:

"Perform the first step of the first program; next, perform the first step of the second program and the second step of the first program; next, perform the first step of the third program, the second step of the second program, and the third step of the first program; and so on."

It sounds plausible that, for any N, this would result in an eventual unordered enumeration of N elements of the set in finite time. If that is so, then yes, it sounds like a deterministic Specker sequence is computable, and yet we have no way of approximating its limit. In fact, knowing its limit would be tantamount to solving the halting problem. Interesting.
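The interleaving quoted above can be written down directly; this sketch just records which (program, step) pair runs at each tick:

```python
def dovetail_schedule(stages):
    """The dovetail order: at stage k, perform step 1 of program k,
    step 2 of program k-1, ..., step k of program 1."""
    order = []
    for k in range(1, stages + 1):
        for p in range(k, 0, -1):
            order.append((p, k - p + 1))  # (program index, step number)
    return order

print(dovetail_schedule(3))
# [(1, 1), (2, 1), (1, 2), (3, 1), (2, 2), (1, 3)]
```

Every program eventually receives arbitrarily many steps, even though no program is ever run to completion before the others start; that is why the enumeration it produces arrives out of order.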

So, that begs the next question. Why is the "least upper bound" criterion so cherished? :-)

I mean: the Real field - even though it nominally satisfies the least upper bound criterion - doesn't help us approximate the limit of a Specker sequence either. All we are achieving with the Real field is that we're pretending we can operate on this "number" as if we could approximate it, even though we can't.

Why the pretense?

Denis,

In my understanding of recursively enumerable sets, they possess a generation function that algorithmically generates the set.

In contrast, an enumerable set possesses a decision function that will determine, in finite time, whether a particular element is a member of the set. These sets are also known as decidable sets.

Note that a decision function may operate as a generation function by iteration through the universe of candidate elements. However, a generation function can never operate as a decision function. If it could, the concepts of decidable sets and recursively enumerable sets would be equivalent. (The set of decidable sets is a proper subset of the set of recursively enumerable sets).

In the case where the sets in question are infinite subsets of an ordered infinite set, e.g. the reals, it is crucial that the generation function g may generate out of order, as in the dovetail scenario. If this were not the case, for any candidate element r, we could simply wait a finite time for g to produce an element greater than r, and we would know whether r was in the set. Hence g would be functioning as a decision function.

These distinctions are important to understand the Specker sequence that you referenced, because it is based on a set that is recursively enumerable but not decidable. You can think of this particular sequence as starting with an infinite string of zeros, with a binary point to the left of the string. As the sequence progresses, one zero is flipped to a one for each element in the sequence. An algorithm determines which zero to flip next, but does not do so in place order. Hence, there is no way to know whether a particular zero flips, because the algorithm never halts.

This sequence is infinite, bounded above by 1, strictly increasing, but has no supremum. If, on the other hand, there was a decision function for the set, it would be easy to find an N for every positive epsilon, such that the sequence is guaranteed to stay within epsilon for any two elements with indexes greater than N. Hence it would be a Cauchy sequence, not a Specker sequence.

Hope this helps.

Thomas: "In my understanding of recursively enumerable sets, they possess a generation function that algorithmically generates the set."

Actually, "recursively enumerable" sets are generally defined with a maybe-decide function which returns if a given element is in the set, and does not return otherwise. I haven't been able to identify a recursively enumerable set that is defined directly with a generation function.

Thomas: "In contrast, an enumerable set possesses a decision function that will determine, in finite time, whether a particular element is a member of the set. These sets are also known as decidable sets."

It seems like that would actually be called a recursive (or decidable) set. I don't recall a definition for a merely "enumerable" set.

Thomas: "Note that a decision function may operate as a generation function by iteration through the universe of candidate elements. However, a generation function can never operate as a decision function. If it could, the concepts of decidable sets and recursively enumerable sets would be equivalent. (The set of decidable sets is a proper subset of the set of recursively enumerable sets.)

In the case where the sets in question are infinite subsets of an ordered infinite set, e.g. the reals, it is crucial that the generation function g may generate out of order, as in the dovetail scenario. If this were not the case, for any candidate element r, we could simply wait a finite time for g to produce an element greater than r, and we would know whether r was in the set. Hence g would be functioning as a decision function."

Agreed.

Thomas: "These distinctions are important to understand the Specker sequence that you referenced, because it is based on a set that is recursively enumerable but not decidable. You can think of this particular sequence as starting with an infinite string of zeros, with a binary point to the left of the string. As the sequence progresses, one zero is flipped to a one for each element in the sequence. An algorithm determines which zero to flip next, but does not do so in place order. Hence, there is no way to know whether a particular zero flips, because the algorithm never halts."

Yes.

Thomas: "Hope this helps."

Thank you. However, as you can see in my previous two comments, I had already come to this understanding at the time.

My remaining question now is: why is the least upper bound criterion so important? :) Why do mathematicians like to pretend that numbers like the supremum of a Specker sequence exist, and that we can do anything with them?

If you're only interested in applied mathematics, which is maybe 95% of all the mathematics people do, then the least upper bound property is not important.

It only comes into play in theoretical mathematics. In the study of real analysis, many fundamental theorems (compactness, connectedness, etc.) are based on the reals being a complete metric space and having the least upper bound property.

The least upper bound property of the reals is also fundamental for order theory.

You say Omega cannot be approximated; this is not true.

Although Omega is not computable, it can be approximated: it is the limit of a computable sequence (like a Specker sequence), as demonstrated by J.-P. Delahaye.

When I say "approximate", what I mean is that for any value of epsilon, it is possible to quote a number such that the true value does not differ from it by more than epsilon.

This is not possible for Omega.
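In that sense of "approximate," a computable number like the square root of 2 is unproblematic; a bisection sketch quotes a value within any requested epsilon of the true one. It is exactly this for-every-epsilon guarantee that Omega lacks:

```python
def approx_sqrt2(eps):
    """Bisection on [1, 2]: return a value within eps of sqrt(2).
    'Approximable' in the above sense: this works for ANY eps > 0."""
    lo, hi = 1.0, 2.0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid  # mid is still below sqrt(2)
        else:
            hi = mid  # mid is at or above sqrt(2)
    return lo

x = approx_sqrt2(1e-6)  # within 1e-6 of sqrt(2)
```

The invariant lo <= sqrt(2) <= hi holds at every step, so when the interval shrinks below eps, lo is within eps of the true value.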

Well, it's not possible for arbitrarily small epsilon. Depending on the Turing machine specification, it may be possible up to some epsilon, though. So yes, in that sense, it can be approximated, up to that point.
