In the field of analysis, it is common to make a distinction between “hard”, “quantitative”, or “finitary” analysis on one hand, and “soft”, “qualitative”, or “infinitary” analysis on the other. “Hard analysis” is mostly concerned with finite quantities (e.g. the cardinality of finite sets, the measure of bounded sets, the value of convergent integrals, the norm of finite-dimensional vectors, etc.) and their quantitative properties (in particular, upper and lower bounds). “Soft analysis”, on the other hand, tends to deal with more infinitary objects (e.g. sequences, measurable sets and functions, $\sigma$-algebras, Banach spaces, etc.) and their qualitative properties (convergence, boundedness, integrability, completeness, compactness, etc.). To put it more symbolically, hard analysis is the mathematics of $\epsilon$, $N$, $O()$, and $\leq$[1]; soft analysis is the mathematics of $0$, $\infty$, $\in$, and $\to$.
At first glance, the two types of analysis look very different; they deal with different types of objects, ask different types of questions, and seem to use different techniques in their proofs. They even use[2] different axioms of mathematics; the axiom of infinity, the axiom of choice, and the Dedekind completeness axiom for the real numbers are often invoked in soft analysis, but rarely in hard analysis. (As a consequence, there are occasionally some finitary results that can be proven easily by soft analysis but are in fact impossible to prove via hard analysis methods; the Paris-Harrington theorem gives a famous example.) Because of all these differences, it is common for analysts to specialise in only one of the two types of analysis. For instance, as a general rule (and with notable exceptions), discrete mathematicians, computer scientists, real-variable harmonic analysts, and analytic number theorists tend to rely on “hard analysis” tools, whereas functional analysts, operator algebraists, abstract harmonic analysts, and ergodic theorists tend to rely on “soft analysis” tools. (PDE is an interesting intermediate case in which both types of analysis are popular and useful, though many practitioners of PDE still prefer to primarily use just one of the two types. Another interesting transition occurs on the interface between point-set topology, which largely uses soft analysis, and metric geometry, which largely uses hard analysis. Also, the ineffective bounds which crop up from time to time in analytic number theory are a sort of hybrid of hard and soft analysis. Finally, there are examples of evolution of a field from soft analysis to hard (e.g. Banach space geometry) or vice versa (e.g. recent developments in extremal combinatorics, particularly in relation to the regularity lemma).)
It is fairly well known that the results obtained by hard and soft analysis respectively can be connected to each other by various “correspondence principles” or “compactness principles”. It is however my belief that the relationship between the two types of analysis is in fact much closer[3] than just this; in many cases, qualitative analysis can be viewed as a convenient abstraction of quantitative analysis, in which the precise dependencies between various finite quantities have been efficiently concealed from view by use of infinitary notation. Conversely, quantitative analysis can often be viewed as a more precise and detailed refinement of qualitative analysis. Furthermore, a method from hard analysis often has some analogue in soft analysis and vice versa, though the language and notation of the analogue may look completely different from that of the original. I therefore feel that it is often profitable for a practitioner of one type of analysis to learn about the other, as they both offer their own strengths, weaknesses, and intuition, and knowledge of one gives more insight[4] into the workings of the other. I wish to illustrate this point here using a simple but not terribly well known result, which I shall call the “finite convergence principle” (thanks to Ben Green for suggesting this name; Jennifer Chayes has also suggested the “metastability principle”). It is the finitary analogue of an utterly trivial infinitary result – namely, that every bounded monotone sequence converges – but sometimes, a careful analysis of a trivial result can be surprisingly revealing, as I hope to demonstrate here.
Before I discuss this principle, let me first present an informal, incomplete, and inaccurate “dictionary” between soft and hard analysis, to try to give a rough idea of the (partial) correspondences between the two:
Soft analysis | Hard analysis |
$x$ finite | $x$ bounded (e.g. $x = O(1)$) |
$x$ vanishes | $x$ small (e.g. $|x| \leq \epsilon$) |
$x$ infinite | $x$ large (e.g. $|x| \geq N$) |
$x_n \to 0$ | Quantitative decay bound (e.g. $x_n = O(n^{-c})$) |
$x_n$ is convergent | $x_n$ is metastable (see below) |
$f$ uniformly continuous | Lipschitz or Hölder bound on $f$ (e.g. $|f(x)-f(y)| \leq C|x-y|$) |
$E$ compact | Metric entropy bound on $E$ |
$E$ is Lebesgue measurable | $E$ is (quantitatively) approximated by bounded complexity sets |
$V$ is generated by $S$ | $V$ is an algorithm initialised by $S$ |
$u$ locally extremises $F(u)$ | $u$ has no nearby competitor with significantly better value of $F(u)$ |
One can draw two conclusions from this table:
- Soft analysis statements can often be stated both succinctly and rigorously, by using precisely defined and useful concepts (e.g. compactness, measurability, etc.). In hard analysis, one usually has to sacrifice one or the other: either one is rigorous but verbose (using lots of parameters such as $\epsilon$, $N$, etc.), or succinct but “fuzzy” (using intuitive but vaguely defined concepts such as “size”, “complexity”, “nearby”, etc.).
- A single concept in soft analysis can have multiple hard analysis counterparts; a “naive” translation of a statement in soft analysis into hard analysis may be incorrect. (In particular, one should not use the above table blindly to convert from one to the other.)
Anyway, back to the finite convergence principle. The infinite convergence principle is well known, though perhaps not by this name, to anyone who has taken undergraduate analysis:
Infinite convergence principle. Every bounded monotone sequence of real numbers is convergent.
This basic principle – essentially equivalent to the Dedekind completeness axiom for the real numbers – is of course fundamental to many parts of infinitary analysis, most obviously in the theory of infinite sequences and series, but it also is implicit in just about any context in which one studies an “infinite” object (e.g. an infinite-dimensional vector space) by expressing it as a monotone limit of “finite” objects. It is undoubtedly an important tool in soft analysis. What, then, is its counterpart in hard analysis?
We will answer this question presently, but let us first make the infinite convergence principle a bit more quantitative. We may as well normalise the bounded sequence to lie between 0 and 1. Expanding out the “epsilon-delta” definition of convergence, we obtain
Infinite convergence principle (again). If $0 \leq x_1 \leq x_2 \leq \ldots \leq 1$, then there exists a real number $x$ such that for every $\epsilon > 0$, there exists an $N$ such that $|x_n - x| \leq \epsilon$ for all $n \geq N$.
There are quite a lot of quantifiers here. One can cut down the complexity a little bit by replacing the notion of a convergent sequence with that of a Cauchy sequence. This lets us eliminate the need for a limit x, which does not have an obvious finitary counterpart. So we get
Infinite convergence principle (yet again). If $\epsilon > 0$ and $0 \leq x_1 \leq x_2 \leq \ldots \leq 1$, then there exists an $N$ such that $|x_n - x_m| \leq \epsilon$ for all $n, m \geq N$.
Note now that one does not need the real number system to make this principle both meaningful and non-trivial; the principle already works quite well when restricted to the rationals. (Exercise: prove this principle for the rationals without constructing the real number system.) Informally speaking, this principle asserts that every bounded monotone sequence is eventually stable up to error $\epsilon$.
Now let’s try to find the finitary (quantitative) equivalent of this principle. The most naive thing to do is simply to replace the infinite sequence by a finite sequence, thus
Finite convergence principle (first attempt). If $\epsilon > 0$ and $0 \leq x_1 \leq \ldots \leq x_M \leq 1$, then there exists $1 \leq N \leq M$ such that $|x_n - x_m| \leq \epsilon$ for all $N \leq n, m \leq M$.
But this is trivially true; one can simply set N equal to M (or any number larger than M). So one needs to strengthen the claim. What about making N be independent of M, and only dependent on ?
Finite convergence principle (second attempt). If $\epsilon > 0$ and $0 \leq x_1 \leq \ldots \leq x_M \leq 1$, then there exists an $N = N(\epsilon)$ depending only on $\epsilon$ such that $|x_n - x_m| \leq \epsilon$ for all $N \leq n, m \leq M$.
But this is trivially false; consider for instance a sequence which equals zero except at $i = M$, at which point it jumps up to 1. We are not going to get the Cauchy property unless we set $N$ to be as large as $M$… but we can’t do that if we only want $N$ to depend on $\epsilon$.
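One can test this counterexample numerically. The following is a hypothetical sketch (the function names `jump_sequence` and `is_cauchy_from` are my own, introduced only for illustration):

```python
def jump_sequence(M):
    """The monotone sequence x_1, ..., x_M with x_i = 0 for i < M and x_M = 1."""
    return [0.0] * (M - 1) + [1.0]

def is_cauchy_from(x, N, eps):
    """Check |x_n - x_m| <= eps for all N <= n, m <= len(x) (1-indexed)."""
    tail = x[N - 1:]
    return max(tail) - min(tail) <= eps

eps = 0.5
# For any fixed N, a long enough jump sequence violates the Cauchy property...
for N in (1, 10, 100):
    assert not is_cauchy_from(jump_sequence(2 * N), N, eps)
# ...whereas N = M always works, as in the trivially true "first attempt":
assert is_cauchy_from(jump_sequence(100), 100, eps)
```

The jump can be pushed past any prescribed $N$, which is exactly why $N$ cannot depend on $\epsilon$ alone.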
So, is there anything non-trivial that one can say at all about finite bounded monotone sequences? Well, we have the pigeonhole principle:
Pigeonhole principle. If $\epsilon > 0$ and $0 \leq x_1 \leq \ldots \leq x_M \leq 1$ is such that $M \geq 1/\epsilon + 1$, then there exists $1 \leq N < M$ such that $|x_{N+1} - x_N| \leq \epsilon$.
Indeed, if the gaps $|x_{N+1} - x_N|$ between each element of the sequence and the next were always larger than $\epsilon$, then $x_M - x_1$ would exceed $(M-1)\epsilon \geq 1$, a contradiction. This principle is true, but it is too weak to be considered a true finitary version of the infinite convergence principle; indeed, we see that the pigeonhole principle easily implies
Weak infinite convergence principle. If $0 \leq x_1 \leq x_2 \leq \ldots \leq 1$, then $\liminf_{n \to \infty} |x_{n+1} - x_n| = 0$.
but does not obviously imply the full infinite convergence principle.
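The pigeonhole principle itself is easy to check numerically. A hypothetical sketch, assuming the quantitative threshold $M \geq 1/\epsilon + 1$ stated above:

```python
import random

def min_gap(x):
    """Smallest gap between consecutive elements of a sorted sequence."""
    return min(b - a for a, b in zip(x, x[1:]))

eps = 0.1
M = int(1 / eps) + 1               # M >= 1/eps + 1, here M = 11
for trial in range(100):
    x = sorted(random.random() for _ in range(M))
    # If every gap exceeded eps, then x_M - x_1 > (M - 1) * eps >= 1,
    # which is impossible for a sequence inside [0, 1].
    assert min_gap(x) <= eps
```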
The problem is that the pigeonhole principle only establishes instantaneous stability of the sequence at some point n, whereas the infinite convergence principle concludes the permanent stability of the sequence after some point N. To get a better finitary match to the infinite convergence principle, we need to extend the region of stability that the pigeonhole principle offers. Now, one can do some trivial extensions such as
Pigeonhole principle (second version). If $\epsilon > 0$ and $k \geq 1$ and $0 \leq x_1 \leq \ldots \leq x_M \leq 1$ is such that $M \geq k(1/\epsilon + 1)$, then there exists $1 \leq N \leq M - k$ such that $|x_n - x_m| \leq \epsilon$ for all $N \leq n, m \leq N + k$,
which one can quickly deduce from the first pigeonhole principle by considering the sparsified sequence $x_k, x_{2k}, x_{3k}, \ldots$. But this is only a little bit better, as it now gives the infinitary statement
Slightly less weak infinite convergence principle. If $0 \leq x_1 \leq x_2 \leq \ldots \leq 1$, then $\liminf_{n \to \infty} |x_{n+k} - x_n| = 0$ for all $k$,
but is still not strong enough to imply the infinite convergence principle in its full strength. Nevertheless, it shows that we can extend the realm of stability offered by the pigeonhole principle. One can for instance sparsify further, replacing n+k with 2n:
Pigeonhole principle (third version). If $\epsilon > 0$ and $0 \leq x_1 \leq \ldots \leq x_M \leq 1$ is such that $M \geq 2^{1/\epsilon + 1}$, then there exists $1 \leq N \leq M/2$ such that $|x_n - x_m| \leq \epsilon$ for all $N \leq n, m \leq 2N$.
This can be proven by applying the first version of the pigeonhole principle to the sparsified sequence $x_1, x_2, x_4, x_8, \ldots$. This corresponds to an infinite convergence principle in which the conclusion is that $\liminf_{n \to \infty} |x_{2n} - x_n| = 0$.
One can of course keep doing this, achieving various sparsified versions of the pigeonhole principle which each capture part of the infinite convergence principle. To get the full infinite convergence principle, one cannot use any single such sparsified version of the pigeonhole principle, but instead must take all of them at once. This is the full strength of the finite convergence principle:
Finite convergence principle. If $\epsilon > 0$ and $F: \mathbf{Z}^+ \to \mathbf{Z}^+$ is a function and $0 \leq x_1 \leq \ldots \leq x_M \leq 1$ is such that $M$ is sufficiently large depending on $F$ and $\epsilon$, then there exists $1 \leq N \leq M$ such that $|x_n - x_m| \leq \epsilon$ for all $N \leq n, m \leq N + F(N)$.
This principle is easily proven by appealing to the first pigeonhole principle with the sparsified sequence $x_{N_1}, x_{N_2}, x_{N_3}, \ldots$, where the indices are defined recursively by $N_1 := 1$ and $N_{j+1} := N_j + F(N_j)$. This gives an explicit bound on $M$, namely $M := N_k$, where $k$ is the first integer larger than $1/\epsilon + 1$. Note that the first pigeonhole principle corresponds to the case $F(N) := 1$, the second pigeonhole principle to the case $F(N) := k$, and the third to the case $F(N) := N$. A particularly useful case for applications is when $F$ grows exponentially in $N$, in which case $M$ grows tower-exponentially in $1/\epsilon$.
Informally, the above principle asserts that any sufficiently long (but finite) bounded monotone sequence will experience arbitrarily high-quality amounts of metastability with a specified error tolerance $\epsilon$, in which the duration $F(N)$ of the metastability exceeds the time $N$ of onset of the metastability by an arbitrary function $F$ which is specified in advance.
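The explicit bound is easy to compute: iterate $N_1 := 1$, $N_{j+1} := N_j + F(N_j)$ for roughly $1/\epsilon$ steps. A hypothetical Python sketch (the function names are my own), which also searches directly for the metastable window:

```python
import math
import random

def sufficient_length(F, eps):
    """M := N_k with N_1 = 1, N_{j+1} = N_j + F(N_j), and k the first
    integer larger than 1/eps + 1."""
    k = math.floor(1 / eps) + 2
    N = 1
    for _ in range(k - 1):
        N = N + F(N)
    return N

def metastable_window(x, F, eps):
    """Search for 1 <= N <= M with |x_n - x_m| <= eps for N <= n, m <= N + F(N)
    (1-indexed; windows clipped at M; by monotonicity the endpoints suffice)."""
    M = len(x)
    for N in range(1, M + 1):
        hi = min(N + F(N), M)
        if x[hi - 1] - x[N - 1] <= eps:
            return N
    return None

# F(N) = N (the "third version") makes M grow exponentially in 1/eps...
assert sufficient_length(lambda n: n, 0.5) == 8        # N_j: 1, 2, 4, 8
# ...and an exponential F already produces enormous bounds:
assert sufficient_length(lambda n: 2 ** n, 0.5) == 2059
# Any monotone sequence of the sufficient length has a metastable window:
eps, F = 0.5, (lambda n: n)
M = sufficient_length(F, eps)
for _ in range(50):
    x = sorted(random.random() for _ in range(M))
    assert metastable_window(x, F, eps) is not None
```

Iterating an exponential $F$ a further $1/\epsilon$-fold is what produces the tower-exponential growth mentioned above.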
Let us now convince ourselves that this is the true finitary version of the infinite convergence principle, by deducing them from each other:
The finite convergence principle implies the infinite convergence principle. Suppose for contradiction that the infinite convergence principle failed. Untangling the quantifiers, this asserts that there is an infinite sequence $0 \leq x_1 \leq x_2 \leq \ldots \leq 1$ and an $\epsilon > 0$ with the property that, given any positive integer $N$, there exists a larger integer $N'$ such that $x_{N'} - x_N > \epsilon$. If we set $F(N) := N' - N$ for each $N$, then no matter how long an initial segment of the sequence we take, there is no $N$ with $|x_n - x_m| \leq \epsilon$ for all $N \leq n, m \leq N + F(N)$. But this contradicts the finite convergence principle.
The infinite convergence principle implies the finite convergence principle. Suppose for contradiction that the finite convergence principle failed. Untangling the quantifiers, this asserts that there exists $\epsilon > 0$ and a function $F$, together with a collection of bounded monotone sequences $0 \leq x^{(i)}_1 \leq \ldots \leq x^{(i)}_{M_i} \leq 1$ whose lengths $M_i$ go to infinity, such that for each one of these sequences, there does not exist $1 \leq N \leq M_i$ such that $|x^{(i)}_n - x^{(i)}_m| \leq \epsilon$ for all $N \leq n, m \leq N + F(N)$. Let us extend each of the finite bounded sequences to infinite bounded sequences in some arbitrary manner, e.g. defining $x^{(i)}_n := x^{(i)}_{M_i}$ whenever $n > M_i$. The space of all bounded sequences is well-known[5] to be sequentially compact in the product topology, thus after refining the $i$ labels to a subsequence if necessary, we can assume that the sequences $x^{(i)}$ converge in the product topology (i.e. pointwise) to a new limit sequence $x$. Since each of the original sequences was bounded in the interval [0,1] and monotone, we see that the limit sequence is also. Furthermore, we claim that there does not exist any $N$ for which $|x_n - x_m| \leq \epsilon/2$ for all $N \leq n, m \leq N + F(N)$. Indeed, if this were the case, then by pointwise convergence we would also have $|x^{(i)}_n - x^{(i)}_m| \leq \epsilon$ for all $N \leq n, m \leq N + F(N)$ and all sufficiently large $i$, but this contradicts the construction of the $x^{(i)}$. But now we see that this infinite bounded monotone sequence contradicts the infinite convergence principle.
One can draw some morals from the above discussion:
- The finitary version of an infinitary statement can be significantly more verbose and ugly-looking than the infinitary original, and the arrangement of quantifiers becomes crucial.
- The “naive” finitisation of an infinitary statement is often not the correct one.
- While the finitary version of an infinitary statement is indeed quantitative, the bounds obtained can be quite poor (e.g. tower-exponential or worse).
- The deduction of the infinitary statement from the finitary one is quite short, as long as one is willing to work indirectly (arguing by contradiction).
- The deduction of the finitary statement from the infinitary one is a bit more complicated, but still straightforward, and relies primarily on compactness.
- In particular, the equivalence of the finitary and infinitary formulations requires a non-trivial amount of infinitary mathematics (though in this particular case, we can at least leave the ultrafilters out of it).
These morals apply not only to the finite and infinite convergence principle, but to many other pairs of finitary and infinitary statements, for instance Szemerédi’s theorem on one hand and the Furstenberg recurrence theorem on the other, as briefly discussed in my Simons lecture on these topics. (In these contexts, the correspondence between the finitary and infinitary statements is known as the Furstenberg correspondence principle.)
— Applications —
So, we’ve now extracted a quantitative finitary equivalent of the infinitary principle that every bounded monotone sequence converges. But can we actually use this finite convergence principle for some non-trivial finitary application? The answer is a definite yes: the finite convergence principle (implicitly) underlies the famous Szemerédi regularity lemma, which is a major tool in graph theory, and also underlies some rather less well known regularity lemmas, such as the arithmetic regularity lemma of Green. More generally, this principle seems to often arise in any finitary application in which tower-exponential bounds are inevitably involved.
Before plunging into these applications, let us first establish a Hilbert space version[6] of the convergence principle. Given a (closed) subspace $X$ of a Hilbert space $H$, and a vector $v \in H$, let $\pi_X(v)$ denote the orthogonal projection of $v$ onto $X$. If $X$ is finite dimensional, then this projection can be defined in a finitary way, for instance by applying the Gram-Schmidt orthogonalisation procedure to a basis of $X$. If $X$ is infinite dimensional, then even the existence of the orthogonal projection is not completely trivial, and in fact relies ultimately on the infinite convergence principle. Closely related to the existence of this projection is the following monotone continuity property:
Hilbert space infinite convergence principle. Let $X_1 \subseteq X_2 \subseteq \ldots$ be a nested sequence of subspaces of a Hilbert space $H$, and let $X := \overline{\bigcup_n X_n}$ be the monotone closed limit of the $X_n$. Then for any vector $v$, $\pi_{X_n}(v)$ converges strongly in $H$ to $\pi_X(v)$.
As with the infinite convergence principle in [0,1], there is a Cauchy sequence version which already captures the bulk of the content:
Hilbert space infinite convergence principle (again). Let $X_1 \subseteq X_2 \subseteq \ldots$ be a nested sequence of subspaces of a Hilbert space $H$, and let $\epsilon > 0$. Then for any vector $v$ there exists $N$ such that $\|\pi_{X_n}(v) - \pi_{X_m}(v)\| \leq \epsilon$ for all $n, m \geq N$.
One can deduce this principle from the analogous principle in [0,1] by first normalising $\|v\| = 1$, and then observing from Pythagoras’ theorem that $\|\pi_{X_n}(v)\|^2$ (which one should view as the energy of $X_n$ as measured relative to $v$) is a bounded monotone sequence from 0 to 1. Applying the infinite convergence principle, followed by Pythagoras’ theorem yet again, we obtain the claim. Once one sees this, one immediately concludes that there is also a finitary equivalent:
Hilbert space finite convergence principle. If $\epsilon > 0$ and $F: \mathbf{Z}^+ \to \mathbf{Z}^+$, and $X_1 \subseteq X_2 \subseteq \ldots \subseteq X_M$ is such that $M$ is sufficiently large depending on $F$ and $\epsilon$, then for any vector $v$ with $\|v\| \leq 1$ there exists $1 \leq N \leq M$ such that $\|\pi_{X_n}(v) - \pi_{X_m}(v)\| \leq \epsilon$ for all $N \leq n, m \leq N + F(N)$.
Informally, given a long enough sequence of nested subspaces, and a given bounded vector v, one can find an arbitrarily good region of metastability in the orthogonal projections of v into these subspaces.
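This Hilbert space version can be simulated concretely by taking $X_n$ to be the span of the first $n$ vectors of a fixed orthonormal basis of $\mathbf{R}^M$, so that $\pi_{X_n}$ is coordinate truncation and the energies are partial sums of squares. A hypothetical numpy sketch (the function name is my own):

```python
import numpy as np

def metastable_projection_index(v, F, eps):
    """Find 1 <= N <= M with ||pi_{X_n}(v) - pi_{X_m}(v)|| <= eps for all
    N <= n, m <= N + F(N), where X_n = span(e_1, ..., e_n)."""
    M = len(v)
    energy = np.cumsum(np.asarray(v, dtype=float) ** 2)  # ||pi_{X_n}(v)||^2
    for N in range(1, M + 1):
        hi = min(N + F(N), M)
        # Pythagoras: ||pi_{X_hi} v - pi_{X_N} v||^2 = energy[hi] - energy[N]
        if energy[hi - 1] - energy[N - 1] <= eps ** 2:
            return N
    return None

rng = np.random.default_rng(0)
v = rng.normal(size=4096)
v /= np.linalg.norm(v)                    # normalise ||v|| = 1
N = metastable_projection_index(v, lambda n: n, 0.25)
assert N is not None                      # a metastable window exists here
```

The reduction to the numerical principle is visible in the code: the whole search runs on the bounded monotone energy sequence.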
From this principle one can then quickly deduce the Szemerédi regularity lemma as follows. Let $G = (V,E)$ be a graph. One can think of the adjacency matrix $1_E$ of this graph as an element of the (finite-dimensional) Hilbert space $L^2(V \times V)$, where the product space $V \times V$ is given normalised counting measure (and the discrete $\sigma$-algebra $2^{V \times V}$). We can construct a nested sequence of $\sigma$-algebras $\mathcal{B}_0 \subseteq \mathcal{B}_1 \subseteq \ldots$ in $V$ (which one can think of as a sequence of increasingly fine partitions of $V$), together with the attendant sequence of subspaces $L^2(\mathcal{B}_n \times \mathcal{B}_n)$ (this corresponds to functions on $V \times V$ which are constant on any product of a pair of cells in the partition), by the following greedy algorithm[7]:
- Initialise $\mathcal{B}_0$ to be the trivial $\sigma$-algebra (i.e. the trivial partition).
- Given $\mathcal{B}_n$, let $f_n$ be the orthogonal projection of $1_E$ to the space $L^2(\mathcal{B}_n \times \mathcal{B}_n)$ (thus the value of $f_n$ on any product of cells is just the edge density between that pair of cells), and let $g_n := 1_E - f_n$ be the deviation of the graph from its density.
- Let $A_n, B_n$ be sets in $V$ which maximise the discrepancy $|\int_{A_n \times B_n} g_n|$.
- Let $\mathcal{B}_{n+1}$ be the $\sigma$-algebra generated by $\mathcal{B}_n$ together with $A_n$ and $B_n$. Now increment $n$ to $n+1$ and return to Step 2.
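A toy implementation of this greedy refinement is sketched below (hypothetical code: to keep the search polynomial, the exact maximisation of the discrepancy over all pairs of sets is replaced by a crude scan of per-vertex row deviations, splitting by the neighbourhood of the worst vertex, in the spirit of the randomised variant in footnote [7]). The point being checked is the monotone energy increment, which is what the finite convergence principle exploits:

```python
import numpy as np

def project(A, labels):
    """f_n: project A onto functions constant on products of cells,
    i.e. replace each cell-pair block of A by its edge density."""
    f = np.empty_like(A, dtype=float)
    for a in np.unique(labels):
        for b in np.unique(labels):
            block = np.ix_(labels == a, labels == b)
            f[block] = A[block].mean()
    return f

def refine(A, labels):
    """One simplified greedy step: score each vertex by the total deviation
    of g_n along its row, then split every cell by the neighbourhood of the
    highest-scoring vertex."""
    g = A - project(A, labels)
    v = int(np.abs(g.sum(axis=1)).argmax())
    nbhd = A[v] > 0
    return labels * 2 + nbhd          # refines the partition: old cell = new // 2

rng = np.random.default_rng(1)
n = 64
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T        # adjacency matrix of a random graph

labels = np.zeros(n, dtype=int)       # trivial partition B_0
energies = []
for _ in range(4):
    energies.append((project(A, labels) ** 2).mean())   # ||f_n||^2, monotone
    labels = refine(A, labels)

# Energy increment: refining the partition can only increase ||f_n||^2.
assert all(e2 >= e1 - 1e-12 for e1, e2 in zip(energies, energies[1:]))
```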
Let $\epsilon > 0$ and let $F: \mathbf{Z}^+ \to \mathbf{Z}^+$ be a function. Applying the Hilbert space finite convergence principle to the above sequence of subspaces, we obtain some $N$ of bounded size (depending on $\epsilon$ and $F$) such that
(*) $\|f_n - f_m\|_{L^2(V \times V)} \leq \epsilon$
for all $N \leq n, m \leq N + F(N)$. By a further application of the pigeonhole principle (for Hilbert spaces), one can find $N \leq n \leq N + F(N)$ such that
$\|f_{n+1} - f_n\|_{L^2(V \times V)}^2 \leq 1/F(N)$.
What this basically means is that the partition $\mathcal{B}_n$ is very regular, in that even the greediest way to refine this partition does not significantly capture any more of the fluctuations of the graph $G$. By choosing $F$ to be a suitable exponentially growing function, one can make the regularity of this partition exceed the number of cells (which is basically $2^{2n}$) in the partition $\mathcal{B}_n$, which is “within epsilon” of the partition $\mathcal{B}_N$ in the sense of (*). Putting all this together, one can get a strong version of the Szemerédi regularity lemma, which implies the usual formulation by a simple argument; see my paper on this topic for further discussion. The choice of F being exponential is what results in the notorious tower-exponential bounds in this regularity lemma (which are necessary, thanks to a result of Gowers). But one can reduce F to, say, a polynomial, resulting in more civilised bounds but with a weaker regularity conclusion. Such a “weak regularity lemma” was for instance established by Frieze and Kannan, and also underlies the “generalised Koopman von Neumann theorem” which is a key component of my result with Ben Green establishing long arithmetic progressions in the primes. In the opposite direction, various flavours of “strong regularity lemma” have appeared in the literature, for instance in this paper of Alon and Shapira, and also turn out to be convenient ways to formulate hypergraph versions of the regularity lemma of adequate strength to imply non-trivial theorems (such as Szemerédi’s theorem).
Rather than using sets which maximise discrepancy, one can also use sublevel sets of the eigenvectors of the adjacency matrix corresponding to the largest eigenvalues of the matrix to generate the partition; see this paper of Frieze and Kannan for details of a closely related construction.
The appearance of spectral theory (eigenvalues and eigenvectors) in this topic brings one into contact with Fourier analysis, especially if one considers circulant matrices (which correspond in graph-theoretic terms to Cayley graphs on a cyclic group). This leads us towards the arithmetic regularity lemma of Green, which regularises a bounded function f on a finite abelian group G in terms of a partition generated by the sublevel sets (Bohr sets) of a bounded number of characters; the precise formulation is a bit lengthy to state properly, although it simplifies substantially in the “finite field model” case when G is a vector space over a small finite field (e.g. $G = \mathbf{F}_2^n$). This arithmetic regularity lemma can also be established using the finite convergence principle (in either the numerical form or the Hilbert space form). Indeed, if we let $H := L^2(G)$ and let $X_n$ be the vector space generated by the characters associated to the $n$ largest Fourier coefficients of $f$, then by applying the finite convergence principle (with $v = f$) we can locate a metastable region, where there is not much going on (in an $L^2$ sense) between $X_N$ and $X_{N+F(N)}$ for some (exponentially growing) function F; thus there is a “spectral gap” of sorts between the N largest Fourier coefficients and the coefficients ranked N+F(N) and beyond. The sublevel sets of characters associated to the N largest coefficients can then be used to regularise the original function f. Similar ideas also appear in an old paper of Bourgain, and in a paper of Green and Konyagin.
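In the model case of a cyclic group, one can watch this spectral gap appear numerically: sort the Fourier energies, take their partial sums (a bounded monotone sequence), and apply the finite convergence principle to them. A hypothetical numpy sketch (`spectral_gap` and its normalisation are my own choices, not Green's formulation):

```python
import numpy as np

def spectral_gap(f, F, eps):
    """Find n such that the Fourier coefficients ranked n+1, ..., n+F(n)
    carry at most an eps^2 fraction of the total L^2 mass."""
    coeffs = np.fft.fft(f) / len(f)
    energy = np.cumsum(np.sort(np.abs(coeffs) ** 2)[::-1])  # top-n mass
    total = energy[-1]
    for n in range(1, len(f)):
        hi = min(n + F(n), len(f))
        if energy[hi - 1] - energy[n - 1] <= eps ** 2 * total:
            return n
    return None

rng = np.random.default_rng(2)
x = np.arange(256)
# a few strong characters plus a little noise
f = (np.cos(2 * np.pi * 3 * x / 256)
     + 0.5 * np.cos(2 * np.pi * 40 * x / 256)
     + 0.01 * rng.normal(size=256))
n = spectral_gap(f, lambda n: 2 * n, 0.1)
assert n is not None and n <= 8   # the handful of strong frequencies come first
```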
— A non-finitisable statement —
One may think from the above discussion that every infinitary statement concerning, say, the natural numbers, has a finitary analogue which is equivalent to it. It turns out that this is not quite the case (and in fact some subtle issues in logic come in), even for very “natural” and “non-pathological” infinitary statements. In particular, the following innocuous-looking infinitary statement is basically non-finitisable:
Infinite pigeonhole principle. If the natural numbers are divided into finitely many colour classes, then one of the colour classes is infinite.
This principle is of course a triviality in the infinitary world, and there are some obvious candidates for finitary versions (e.g. the finite pigeonhole principle), but they are not equivalent to the infinitary principle the same way that the finite convergence principle is equivalent to the infinite convergence principle; there is a failure of compactness here. There is a finitary version (of sorts), but it is somewhat difficult to state. Define a set function to be any function $F$ which takes a finite set $A$ of natural numbers as input, and returns a natural number $F(A)$ as output. Let us say that a set function $F$ is asymptotically stable near infinite sets[8] if, given any sequence $A_n$ of finite sets of natural numbers which converges to an infinite set $A$ in the sense that, for any finite set $I$, $A_n \cap I = A \cap I$ for all sufficiently large $n$, the numbers $F(A_n)$ are constant for sufficiently large $n$. For instance, the “least element” set function $F(A) := \min(A)$ (adopting the non-standard convention that the infimum of the empty set is 0) is asymptotically stable, but the “cardinality” set function $F(A) := |A|$ is not. Anyway, the correct “finitary” version of the infinite pigeonhole principle is
“Finitary” infinite pigeonhole principle. Let $F$ be a set function that is asymptotically stable near infinite sets, and let $k$ be a positive integer. Then there exists a positive integer $N$ with the property that whenever the set $\{1,\ldots,N\}$ is divided into $k$ colour classes, at least one of the colour classes $A$ has the property that $|A| > F(A)$.
It is a good exercise to see that this principle is equivalent to the infinite pigeonhole principle. Note that the plain old pigeonhole principle essentially corresponds to the case when $F$ is a constant function – and is thus definitely a very special case. The case when $F(A) := f(\min(A))$ for some fixed function $f$ is already very interesting – it is the 1-uniform case of a “strong Ramsey theorem” and is barely provable by finitary means (try it, say, for $k=10$ and a rapidly growing $f$ such as $f(n) := 2^n$. What quantitative bound for $N$ do you get?), although the general case of that theorem is not (but does follow easily from the above principle), thanks to the Paris-Harrington theorem. The assumption of asymptotic stability of $F$ near infinite sets is necessary, as one can see by considering the counterexample $F(A) := |A|$.
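For the concrete set function $F(A) := \min(A)$ one can compute the threshold $N$ by brute force (hypothetical code; `least_N` is my own naming). The adversary's best strategy is to tile $\{1\}, \{2,3\}, \{4,\ldots,7\}, \ldots$, with $|A| = \min(A)$ at each step; this covers $2^k - 1$ integers, so the answer should be exactly $2^k$:

```python
from itertools import product

def has_large_class(colouring, k):
    """colouring[i] is the colour of the integer i+1; check whether some
    colour class A satisfies |A| > min(A)."""
    for c in range(k):
        A = [i + 1 for i, col in enumerate(colouring) if col == c]
        if A and len(A) > min(A):
            return True
    return False

def least_N(k):
    """Least N such that every k-colouring of {1, ..., N} has such a class."""
    N = 1
    while True:
        if all(has_large_class(col, k) for col in product(range(k), repeat=N)):
            return N
        N += 1

assert least_N(2) == 4
assert least_N(3) == 8
```

The exhaustive search is exponential in $N$, which is one reason the general strong Ramsey theorem is so hard to attack by finitary means.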
I am enclosing “finitary” in quotes because while most of the assertion of this principle is finitary, one part still is not, which is the notion of “asymptotically stable near infinite sets”. This is a notion which cannot be precisely formulated in a purely finitary manner, even though the notion of a set function is basically a finitary concept (ignoring for now a subtle issue about what “function” means). If one insists on working in a finitary setting, then one can recast the infinite pigeonhole principle as a schema of finitary principles, one for each set function F that is asymptotically stable near infinite sets, but working out exactly which set functions are asymptotically stable near infinite sets requires infinitary mathematics. (And for some (constructible, well-defined) set functions, the asymptotic stability near infinite sets is undecidable; this fact is closely related to the undecidability of the halting problem and is left as an exercise to the reader.)
The topic of exactly which statements in infinitary mathematics are “truly infinitary” is a fascinating one, and is basically a question in reverse mathematics, but we will not be able to discuss it here.
(I am indebted to Harvey Friedman for discussions on the Paris-Harrington theorem and the infinite pigeonhole principle.)
— Footnotes —
(N.B. Footnotes were created following the suggestions from this post by santoki.)
[1] One can subdivide hard analysis into further subcategories by inspecting what kind of inequalities are used. There is “exact hard analysis” where one really uses $\leq$; “quasi-exact hard analysis” in which one is willing to lose absolute constants (and so one sees notation such as $O()$, $\ll$, or $\lesssim$); “logarithmically coarse hard analysis” in which one is willing to lose quantities such as $O(\log^{O(1)} N)$ which are “logarithmic” in some key parameter N; and “polynomially coarse hard analysis” in which one is willing to lose quantities such as $O(N^{O(1)})$ which are polynomial in key parameters. Finally, there is “coarse analysis” in which one is willing to lose arbitrary functions of key parameters. The relationships between these flavours of hard analysis are interesting, but will have to wait to be discussed elsewhere.
[2] One can use these axioms to make finer distinctions, for instance “strongly finitary” analysis, in which one is not even willing to use real numbers, but instead only works with finite complexity numbers (e.g. rationals), and “strongly infinitary” analysis, in which one freely uses the axiom of choice (or related concepts such as ultrafilters). There are also hybrids between finitary and infinitary analysis, such as “pre-infinitary” analysis, in which one takes sequences of increasingly large or complex objects, and uses phrases such as “passing to a subsequence if necessary” a lot, but does not actually “jump to the limit”; we also have “pseudo-finitary” analysis, of which non-standard analysis is the most prominent example, in which infinitary methods are re-expressed using infinitesimals or other pseudo-finitary objects. Again, these distinctions will have to wait to be discussed elsewhere.
[3] There are rigorous results from proof theory, such as Herbrand’s theorem, which can allow one to automatically convert certain types of qualitative arguments into quantitative ones, but there appears to be only a small amount of literature applying these general results in logic to actual soft analysis arguments. (See the comments to this post for some recent developments in this area.)
[4] For instance, in my result with Ben Green establishing arbitrarily long arithmetic progressions of primes, the argument was (necessarily) finitary in nature, but it was absolutely essential for us to be aware of the infinitary arguments and intuition that had been developed in ergodic theory, as we had to adapt such arguments to the finitary setting in order to conclude our proof, and it would have been far less evident how to discover such arguments if we were always restricted to looking at finitary settings. In general, it seems that infinitary methods are good for “long-range” mathematics, as by ignoring all quantitative issues one can move more rapidly to uncover qualitatively new kinds of results, whereas finitary methods are good for “short-range” mathematics, in which existing “soft” results are refined and understood much better via the process of making them increasingly sharp, precise, and quantitative. I feel therefore that these two methods are complementary, and are both important to deepening our understanding of mathematics as a whole.
[5] This looks like Tychonoff’s theorem, but because we require sequential compactness here rather than topological compactness, the result here is in fact much closer in spirit to the Arzelà-Ascoli theorem. In particular, the axiom of choice is not actually used here, instead one repeatedly uses the Bolzano-Weierstrass theorem for the interval [0,1] followed by a diagonalisation argument. The astute reader here will observe that the Bolzano-Weierstrass theorem is essentially equivalent to the infinite convergence principle! Fortunately, there is no circularity here, because we are only using this theorem in order to deduce the finite convergence principle from the infinite, and not the other way around.
[6] One could also view this as a “noncommutative” or “quantum” version of the convergence principle, but this is somewhat of an abuse of terminology, despite the presence of the Hilbert space, since we don’t actually have any noncommutativity or any other quantum weirdness going on.
[7] One can also replace this greedy algorithm by a random algorithm, in which each is obtained from by adding a neighbourhood of a randomly chosen vertex in V. This use of randomisation appears in the infinitary setting in this paper of myself, and in the finitary setting in this paper of Ishigami.
[8] This notion is very roughly equivalent to the notion of a function $F(A)$ defined for all sets of integers (both finite and infinite) which can always be “computed” in “finite time”, even if the set is infinite. But one should take this informal definition with a large grain of salt: while there is indeed an algorithm for computing $F(A)$ for any given set $A$ which will eventually give the right answer, you might not be able to tell when the algorithm has finished! A good example is the function $F(A) := \min(A)$ (with the convention $\min(\emptyset) := 0$), which is asymptotically stable near infinite sets: you can “compute” this function for any set $A$ by initialising the answer to 0, running a counter $n$ from 0 to infinity, and resetting the answer permanently to $n$ the first time $n$ lies in $A$. As long as $A$ is non-empty, this algorithm stabilises at the correct answer in finite time; if $A$ is empty, the algorithm gives the right answer from the beginning, but you can never be sure of this fact! In contrast, the cardinality $|A|$ of a possibly infinite set $A$ cannot be computed even in this rather unsatisfactory sense of having a running “provisional answer” which is guaranteed to eventually be correct.
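This “provisional answer” computation is easy to phrase in code. A hypothetical sketch (membership queries stand in for the set $A$, and the counter starts at 1 rather than 0 for simplicity):

```python
def provisional_min(membership, steps):
    """Run a counter n = 1, 2, ..., steps; the provisional answer starts at 0
    and is reset permanently to the first n found to lie in A."""
    answer = 0
    for n in range(1, steps + 1):
        if answer == 0 and membership(n):
            answer = n
    return answer

evens = lambda n: n % 2 == 0
assert provisional_min(evens, 1) == 0    # provisionally 0: nothing found yet
assert provisional_min(evens, 10) == 2   # stabilised at the true minimum
assert provisional_min(lambda n: False, 1000) == 0   # right, but unverifiable
```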
[Update, May 24: Typos corrected. A distinction drawn between the strong Ramsey theorem (a theorem that can be stated in finitary mathematics), and the Paris-Harrington theorem (which shows that the strong Ramsey theorem is unprovable by finitary means). Acknowledgment to Harvey Friedman added, as well as a link to reverse mathematics.]
[Update, May 25: functional analysis changed to operator algebras. Footnote [3] modified in view of recent comments.]
[Update, May 31: corrected to “the least element of A when A is non-empty, and 0 if A is empty”.]
[Update, Aug 19, 2008: Definition of asymptotic stability corrected; thanks to Jaime Gaspar for pointing out the problem.]
65 comments
24 May, 2007 at 3:27 am
thomas1111
That’s a fascinating post! I was wondering if this had any chance of appearing in book form one day (e.g. in a second edition of your undergraduate books on analysis)?
(Btw, a few trivial typos: 5. should read “The deduction of the finitary statement…”, and the ‘latex’ command is misplaced in the Hilbert space infinitary convergence principle, and missing in a paragraph a little later.)
24 May, 2007 at 5:59 am
Michael Kinyon
A very well-written and insightful essay. You should consider submitting this to the Monthly, if you haven’t already.
24 May, 2007 at 8:18 am
Terence Tao
Thanks for the suggestions! Actually, Ben Green and I are planning to write a paper on the finite convergence principle and the philosophy surrounding it, once we have collected a few more applications of this principle.
24 May, 2007 at 9:21 am
richard borcherds
I suggest that hard analysis is essentially first order (Peano) arithmetic, while soft analysis is second order arithmetic; in other words, hard analysis proofs are those that can be encoded in Peano arithmetic. Harvey Friedman has done a lot of work on what happens when you try to convert a proof in second order arithmetic into one in first order arithmetic: even when this is possible, the proof can get MUCH longer.
Friedman has also investigated several subsystems of second order arithmetic, which might be related to the different subsystems of soft analysis mentioned in the article.
24 May, 2007 at 9:33 am
Terence Tao
Dear Richard,
Thanks for your comment! I think you are essentially correct in this. There seems to be a vague correspondence between how strong a logic one is willing to use, and how rapidly growing a function from the natural numbers to itself that one is willing to use (e.g. polynomial, exponential, primitive recursive, Ackermann, non-computable, ineffective, etc.) The latter is of course already very similar to the quantitative/qualitative distinction in analysis.
Incidentally, Harvey was the one who pointed me towards the Paris-Harrington theorem and the non-finitisability of the infinite pigeonhole principle; I had neglected to attribute this and will correct the post accordingly. I’m also adding a pointer to the field of reverse mathematics, which is very relevant to these issues and to which Harvey has contributed quite a lot.
24 May, 2007 at 7:47 pm
Mark Meckes
You name functional analysis as one field which relies primarily on soft analysis tools. However, the spectrum of problems studied by modern functional analysts draws on both hard and soft tools to various extents; the flavor of the distinctions you discuss, and the finer distinctions of footnote [1], are familiar just within the geometry of Banach spaces. On the soft analysis end is the isomorphic theory of infinite-dimensional Banach spaces, which is the most classical part of the field. On the hard analysis end is the isometric theory of infinite- or finite-dimensional Banach spaces. In between are what are called the isomorphic and almost-isometric theories of finite-dimensional Banach spaces, which can be characterized in your terminology as “quasi-exact hard analysis”, with the almost-isometric theory being harder (or more exact) than the isomorphic theory. For a discussion of the latter two theories, see for example Bo’az Klartag’s 2006 ICM talk.
25 May, 2007 at 6:44 am
Terence Tao
Dear Mark,
Thanks for your comments! I guess I managed to cause some misunderstanding; here at UCLA “functional analysis” usually gets identified with “operator algebras” (e.g. von Neumann algebras) rather than “Banach space geometry”, due to the nature of our research group in that area; I’ll correct the post accordingly. Actually, Banach space geometry is indeed a nice example of an evolution of a field from soft infinitary analysis towards hard finitary analysis. (In the converse direction, extremal combinatorics has been firmly in the “hard analysis” camp until very recently, when signs of soft analysis have begun appearing.)
25 May, 2007 at 8:20 am
Henry Towsner
The finitary convergence principle is an instance of a proof-theoretic technique called the Dialectica translation. Given an arbitrary formula, the Dialectica translation produces an equivalent formula (possibly involving higher types) with the property that, given a proof (in a suitable logical system–say, Peano Arithmetic) of the original formula, there is a constructive proof of the translated formula.
The key property is that Pi-0-2 (“forall x there exists a y such that P(x,y)” where P(x,y) is computable) formulas are unchanged. So given a proof of a Pi-0-2 formula using infinitary methods that give no quantitative information, the Dialectica translation describes how to extract a constructive proof with quantitative bounds. Not every proof has this property–it depends on the exact logical system the proof is in–but there’s a fair amount of work, especially by Ulrich Kohlenbach, on exactly how far the translation can be pushed.
For a long time, there were a handful of specific applications (Luckhardt to the Thue-Siegel-Roth theorem, Girard to van der Waerden’s theorem, various results by Bishop and his associates), but not many people systematically working on applications of those techniques; that’s changed in the last ten or fifteen years. The biggest group in that area is Kohlenbach and his students, who have produced a number of results by applying these techniques to soft analytic proofs (mainly in fixed point theory). (I may as well plug my own work – Jeremy Avigad and I recently started trying to produce hard analytic versions of certain soft analytic proofs in ergodic theory, and, together with Philipp Gerhardy, just finished a paper with a hard analytic version of the ergodic theorem which should be online in the next week or two.)
25 May, 2007 at 11:44 am
Ulrich Kohlenbach
As my work has been mentioned in Henry Towsner’s comment, I’d like to give a few pointers to my papers. The “finite convergence principle” as discussed by Terence Tao features prominently, e.g., in:
1) Some computational aspects of metric fixed point theory. Nonlinear Analysis vol. 61, no. 5, pp. 823-837 (2005).
2) Bounds on iterations of asymptotically quasi-nonexpansive mappings (with Branimir Lambov). In: Falset, Fuster, Sims (eds.), Proc. International Conference on Fixed Point Theory and Applications, Valencia 2003, pp. 143-172, Yokohama Publishers (2004).
A general survey is in my paper “Effective bounds from proofs in abstract functional analysis”. In: Cooper, Loewe, Sorbi (eds.), CiE 2005: New Computational Paradigms. Springer. To appear 2007 (available from my homepage; see theorems 22 and 30).
The new paper by Avigad, Gerhardy and Towsner provides another nice use of this concept and the logical methodology sketched below. There is also another paper by myself and Laurentiu Leustean using this concept coming up soon.
The finite convergence principle is exactly the so-called Herbrand normal form, or no-counterexample interpretation (G. Kreisel), of (one formulation of) the usual Cauchy statement, which in this simple case also coincides with the Goedel functional interpretation (of the Goedel negative translation) of the Cauchy property (as pointed out by Henry Towsner). For more complicated statements, Goedel’s interpretation differs from the Herbrand normal form in order to achieve a much better logical behavior, at the expense of using functionals in higher function spaces (rather than just number-theoretic functions). In fact, proofs can be logically analyzed by transforming every statement in the proof (no matter what logical form it has) into its Goedel interpretation, so as to guarantee an effective witness for it (i.e. an effective bound on the finite convergence version, in the case where the conclusion is a convergence statement). This applies to strong formal systems in which the bulk of ordinary analysis can be formalized. The “finitary” infinite pigeonhole principle is just another instance of the aforementioned interpretation scheme.
General background information on Goedel’s method and its modern refinements, necessary to make it applicable in analysis, can be found in my invited chapter for the Goedel Centenary Volume, “Goedel’s functional interpretation and its use in current mathematics” (available on my homepage), as well as in my forthcoming book “Applied Proof Theory: Proof Interpretations and their use in Mathematics”, which I am currently finishing for Springer Monographs in Mathematics (approx. 500 pp., expected to appear in 2008).
See also my joint survey with Paulo Oliva, “Proof mining: a systematic way of analysing proofs in analysis”, Proc. Steklov Inst. Math. vol. 242, pp. 136-164 (2003).
25 May, 2007 at 12:05 pm
Terence Tao
Dear Henry and Ulrich,
Thank you for bringing these recent developments in proof theory to my attention (and to the attention of other readers, of course). I was aware of Girard’s work on finitising the Furstenberg-Weiss proof of van der Waerden’s theorem, but not on later developments, and I am happy to hear that there is active work in this direction. In particular, the issue of finitising the work of Furstenberg and his school in ergodic theory seems to be particularly relevant to current developments in combinatorics and number theory.
Hopefully there will be a bit more dialogue between the hard analysts, soft analysts, and proof theorists, in the future :-) .
25 May, 2007 at 2:57 pm
JL
As you mention in footnote [1], we “hard analysts” often distinguish between different types of growth and consider them qualitatively different (O(1) vs. polylog(n) vs. polynomial, etc.). While there is often a hard-soft correspondence (usually via some type of compactness) when the quantitative growth can be considered O(1) with respect to some parameter, there are far fewer examples where, for instance, polynomial growth manifests itself as infinitary stability while super-polynomial growth does not (i.e. “soft” conclusions that correspond only to certain types of super-constant bounds). A notable exception is Gromov’s theorem on groups of polynomial growth.
Mike Freedman suggested a few years ago that maybe such a correspondence could be used to elucidate the P vs. NP question. His basic idea is that problems in P should have decidable limits, while hard problems (e.g. SATisfiability) should become undecidable in the limit (whatever “limit” means).
25 May, 2007 at 3:28 pm
Terence Tao
Dear JL,
Thanks for the comment, and for the pointer to Freedman’s work! I have also felt that the “polynomially quantitative” category of hard analysis should be equivalent to either some sort of soft analysis in the “polynomial limit”, or to some sort of restricted “polynomial logic” in which certain operations (notably excessive use of induction, which can easily lead to exponential growth or worse) are prohibited. Certainly there are many polynomially quantitative arguments in which one can basically just glance at the form of the argument and say “OK, there is nothing really iterative happening here, thus I am convinced that the argument will necessarily output a bound of polynomial type” – but with our current mathematical tools we still have to verify the polynomial growth by tedious calculation. I have even tried once or twice to construct an infinitary limit object by brute force (e.g. considering sequences of finitary objects tending to infinity, quotiented by the equivalence relation of being equivalent up to polynomial losses, and maybe throwing in an ultrafilter to force limits to exist) but nothing particularly interesting seems to emerge at the other end of the process.
What one really wants to see is some rich infinitary theory come into the picture, similar to how the theory of Lie groups enters in Gromov’s limit of groups of polynomial growth. But the polynomial infinitary limit – whatever it is – seems strangely “incomplete” (due to the forbidden nature of induction) and so much of the power of soft analysis seems to be lost (many of the really deep theorems in soft analysis only work on complete metric spaces, such as Banach or Hilbert spaces). [It may be that one needs to “compactify” the infinitary limit somehow, creating new infinitary objects exhibiting some sort of weird “virtual” polynomial growth; but this is of course extremely speculative.]
One possible concrete area where it may be reasonable to implement such a soft theory is the recent developments in sum-product theorems, as developed primarily by Bourgain and his co-authors. Here, the whole aim of the game is to extract a power gain of some small exponent ε in something like a sum set or product set, where p is the cardinality of the underlying field and one doesn’t care at all exactly what ε is, so long as it is positive and independent of p. As one would expect from Bourgain, all the papers are written in the “hard analysis” style, but I feel these arguments have some inherent softness to them (because the exact values of all the ε’s involved play virtually no role whatsoever in the argument) and one might be able to clean up the arguments and reduce a lot of epsilon-bookkeeping by moving to a softer approach.
25 May, 2007 at 5:12 pm
Terence Tao
Perhaps in the spirit of Gromov’s theorem, I should rename “polynomial logic” as “nilpotent logic”; some iteration / use of induction is allowed (e.g., given that one object has polynomial growth, to deduce that a related object has polynomial growth), but there has to be some finite depth to this iteration to prevent exponential growth from occurring. (There seems to be a vague analogy here with the concept of primitive recursive functions in computability theory, though it doesn’t seem to be exactly the same thing.)
25 May, 2007 at 6:33 pm
Open question: effective Skolem-Mahler-Lech theorem « What’s new
[…] #1, there are often ways to make these arguments quantitative and effective, as discussed in my previous post. But #2 and #3 seem to be irreducibly ineffective: if you know that a set A has finite cardinality […]
25 May, 2007 at 8:44 pm
Walt
I think results like Gromov’s are atypical examples of polynomially-bounded objects. I would argue that finitely-presented groups are “naturally” either finite or of exponential growth, which makes polynomial growth a strong condition to impose, one that sharply limits the structure of the group. I would guess this is something peculiar to an equationally-presented theory such as the theory of groups, and is not something you would see very often in more combinatorial settings.
26 May, 2007 at 3:05 am
Mark Meckes
Terry,
I know you’re well aware of the quantitative side of Banach space geometry; I just wanted to take the opportunity to advertise a field I like. To continue on that note, I could add that there are parts of the theory of operator algebras that, via free probability theory and random matrices, involve some very hard analysis indeed. Of course I can’t take issue with your statement that operator algebraists “tend to” use softer tools.
On a different note, can you say some more, or give some references to the appearance of soft analysis in extremal combinatorics that you mentioned?
26 May, 2007 at 7:56 am
Terence Tao
Dear Mark,
This is a good point about free probability/random matrices. (Also, there is a lot of recent work on getting usefully quantitative versions of Kazhdan property T.) I guess I should be a little more cautious in designating camps for various fields; nowadays all the techniques are slowly getting blended together, which I think is a very healthy thing for analysis as a whole.
As I mentioned in my main post, the Szemeredi regularity lemma in graph theory (which has some bearing on extremal graph theory problems, as mentioned in my post on the triangle-diamond problem) can be proven by a finitary version of an infinitary statement (the infinite convergence principle), which then suggests that the regularity lemma itself should have an infinitary formulation. Recently, a couple of proposals for this have appeared, notably the theory of limits of sequences of increasingly large graphs and hypergraphs by Lovasz-Szegedy (dubbed “graphons”), similar work by Borgs, Chayes, Lovasz, Sos, and Vesztergombi, and also a more “ergodic theory” approach (based on a hypergraph analogue of the Furstenberg correspondence principle) by myself; also, Lovasz-Szegedy and Bollobas have observed that the regularity lemma has meaningful applications to infinitary settings, for instance establishing compactness of various classes of operators in certain topologies. Infinitary approaches (e.g. ultrafilters, especially idempotent ultrafilters) have also played an important role in Ramsey theory for some time now, and of course Szemeredi’s theorem itself has a fully infinitary proof (by ergodic theory) and a pre-infinitary proof (Szemeredi’s original proof involved lots of sequences of things going to infinity).
26 May, 2007 at 8:08 am
Terence Tao
Dear Walt,
That’s an interesting point. Actually it seems that there are two sorts of results involving polynomial bounds which are sort of dual to each other; one tries to establish polynomial bounds as a conclusion starting from a hypothesis which does not obviously exhibit polynomiality, and one tries to use polynomial bounds as a hypothesis and obtain a conclusion which is much stronger than polynomiality. Gromov’s theorem is of course in the second category, whereas most results involving polynomial bounds tend to be in the first category. It may well be that the infinitisability of the former is indeed a completely different story from the latter, as you propose.
On the other hand, I feel that Gromov’s theorem is not completely singular; it is an example of a rigidity theorem (weak or approximate structure implies strong or algebraic structure), and these seem to be all over the place in mathematics (and always incredibly useful and deep whenever they do show up).
2 June, 2007 at 4:26 am
Stephen G. Simpson
REVERSE MATHEMATICS
There is an extensive, highly developed research program in the foundations of mathematics, known as REVERSE MATHEMATICS. The purpose of reverse mathematics is to determine the strength of the set-existence axioms which are needed in order to prove various specific mathematical theorems.
The specific mathematical theorems which have been analyzed from the reverse mathematics point of view include many well known theorems from real analysis, functional analysis, complex analysis, algebra, geometry, and combinatorics. Recently Mummert and Simpson have extended reverse mathematics into the realm of general topology.
The basic reference on reverse mathematics is my book Subsystems of Second Order Arithmetic, published by Springer-Verlag in 1999. A second edition will be published soon by the Association for Symbolic Logic and Cambridge University Press. The subsystems which arise there correspond to a hierarchy of stronger and stronger set existence axioms which are potentially useful in mathematics.
Below I list some references to papers dealing with the reverse mathematics of various theorems in countable combinatorics.
1. The analysis of Ramsey’s Theorem from the reverse mathematics point of view is essentially due to Carl Jockusch in the 1960s, though at the time the results were not formulated in this way.
2. A reverse mathematics analysis of Hindman’s Theorem, as well as the closely related Auslander/Ellis Theorem from topological dynamics, has been carried out by Andreas Blass, Jeffry Hirst, and Stephen Simpson. (Hindman’s Theorem says that, if the positive integers are colored with finitely many colors, then there exists an infinite set of positive integers, X, such that all sums of finite subsets of X have the same color.) The Blass/Hirst/Simpson paper was published in the volume Logic and Combinatorics, in the AMS book series Contemporary Mathematics, volume 65, 1987.
3. The reverse mathematics analysis of the Podewski/Steffens Theorem (this is essentially the Koenig Duality Theorem for countably infinite graphs) was begun by Aharoni/Magidor/Shore and completed by Simpson. Simpson’s paper is in the Journal of Symbolic Logic, volume 59, 1994.
4. A reverse mathematics analysis of the Carlson/Simpson Dual Ramsey Theorem (Advances in Mathematics, volume 53, 1984) has been carried out by Joseph S. Miller and Reed Solomon. Unfortunately the Miller/Solomon results are only partial, but they are interesting nonetheless. The Miller/Solomon paper is in Archive for Mathematical Logic, volume 43, 2004.
Additional information and references are available on my web pages.
Stephen G. Simpson
Professor of Mathematics
Pennsylvania State University
http://www.math.psu.edu/simpson/
2 June, 2007 at 4:34 am
Thoughts on "Soft analysis, hard analysis, and the finite convergence principle" by T.Tao « cow_gone_mad
[…] by T.Tao Posted in Uncategorized by cowgonemad on the June 2nd, 2007 Having read Soft analysis, hard analysis, and the finite convergence principle « What’s new by Terence Tao, there are a few things I am wondering about connected to the approximation of […]
12 June, 2007 at 12:40 pm
Jeremy Avigad
Friends,
Philipp Gerhardy, Henry Towsner, and I have put the current draft of our paper, “Local stability of ergodic averages,” on arXiv (http://arxiv.org/abs/0706.1512). We show that even though one cannot, in general, compute a bound on the rate of convergence of a sequence of ergodic averages, one can bound how far one has to look to find large intervals on which such a sequence is relatively stable. Our proofs are entirely explicit, providing “hard” versions of the mean and pointwise ergodic theorems, in the sense of Tao’s post.
We have hopes that such results will bear on applications of ergodic theory to number theory and combinatorics, where, at the end of the day, one is only interested in the behavior of sequences on sufficiently large finite intervals. But we are essentially just a bunch of logicians messing around with ergodic theory, so thoughts and comments are especially welcome.
Best wishes,
Jeremy Avigad
Carnegie Mellon University
13 June, 2007 at 8:58 am
Gabor Elek
Dear all,
Balazs Szegedy and I have a recent preprint which illustrates, in a sense, the point of Terence Tao: “Limits of Hypergraphs, Removal and Regularity Lemmas. A Non-standard Approach”, on arXiv at http://arxiv.org/abs/0705.2179.
Here we used pseudo-finitary analysis – which Tao mentioned in his post – as a bridge that connects finite and infinite analysis. It turns out that the Lebesgue Density Theorem can be translated to the Hypergraph Removal Lemma of Rodl et al., and the Rectangular Approximability Lemma can be translated to the Hypergraph Regularity Lemma, in the same way that the Convergence of Bounded Monotone Sequences is translated to the Finite Convergence Principle.
Best regards,
Gabor Elek
18 June, 2007 at 10:24 am
The Lebesgue differentiation theorem and the Szemeredi regularity lemma « What’s new
[…] June 18th, 2007 in short story This post is a sequel of sorts to my earlier post on hard and soft analysis, and the finite convergence principle. Here, I want to discuss a well-known theorem in infinitary soft analysis – the Lebesgue […]
25 June, 2007 at 9:01 am
Ultrafilters, nonstandard analysis, and epsilon management « What’s new
[…] 2007 in short story, opinion This post is in some ways an antithesis of my previous postings on hard and soft analysis. In those posts, the emphasis was on taking a result in soft analysis and converting it into a hard […]
10 July, 2007 at 4:14 pm
Norm convergence of multiple ergodic averages for commuting transformations « What’s new
[…] result by an equivalent finitary norm convergence result, in exactly the same way that the infinite convergence principle is equivalent to the finite convergence principle. (This is in marked contrast with the usual ergodic-theory approach to these problems, in which one […]
21 January, 2008 at 10:14 am
254A, Lecture 4: Multiple recurrence « What’s new
[…] Exercise 3. Use Theorem 2 to deduce a finitary version: given any positive integers m and k, there exists an integer N such that whenever is coloured into m colour classes, one of the colour classes contains an arithmetic progression of length k. (Hint: use a “compactness and contradiction” argument, as in my article on hard and soft analysis.) […]
21 January, 2008 at 9:16 pm
254A, Lecture 5: Other topological recurrence results « What’s new
[…] Remark 3. There is a stronger statement known, namely that G contains an infinitely large monochromatic subhypergraph, but we will not prove this statement, known as the infinite hypergraph Ramsey theorem. In the case k=1, these statements are the pigeonhole principle and infinite pigeonhole principle respectively, and are compared in my article on hard and soft analysis. […]
10 February, 2008 at 6:09 am
Dan
Cool article. I was however slightly confused by what hard analysis and soft analysis are supposed to be. Is there a precise (i.e. mathematico-logical) distinction? For example, does the principle “every nonempty set of natural numbers has a least element” count as a principle of soft analysis?
10 February, 2008 at 9:19 am
Terence Tao
Dear Dan,
There is not really a precise dividing line – it’s one of these “I know it when I see it” distinctions – but I would classify the well-ordering principle (every non-empty set of naturals has a least element) as a statement in soft analysis, because (a) it is typically applied to infinite sets rather than finite sets, and (b) it does not provide any quantitative bound as to what the least element is.
This particular statement is actually rather difficult to convert into a useful hard analysis statement because of a subtle issue in the innocuous adjective “non-empty”. Due to the undecidability of the halting problem, we know that there are sets of naturals that are explicitly describable (or more precisely, computably enumerable) but for which it is undecidable whether the set is empty or not. (e.g. consider the set of numbers which are the ASCII code for a proof that ZFC is inconsistent.) Because of this, the mere knowledge that a set is non-empty cannot translate to any useful bound unless that knowledge is strengthened in some concrete way. For instance, if one can actually exhibit an element in the set to prove its non-emptiness, then that clearly places an upper bound on the least element of that set, which can then be located in finite time.
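To make the last point concrete, here is a tiny Python sketch (names hypothetical): merely knowing that a set S is non-empty gives no bound, but exhibiting a single element w of S bounds the least element by w, so a finite scan of 0, …, w locates min(S).

```python
def least_element(in_S, witness):
    """Locate min(S) given a membership test for S and one exhibited element.
    The witness is an upper bound on min(S), so a finite scan suffices; with
    only the abstract knowledge that S is non-empty, no such bound exists."""
    assert in_S(witness), "the exhibited element must actually lie in S"
    return next(n for n in range(witness + 1) if in_S(n))

# S = multiples of 7 greater than 20; exhibiting 42 in S pins down min(S) = 21.
m = least_element(lambda n: n > 20 and n % 7 == 0, 42)
```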
10 February, 2008 at 11:50 am
Dan
OK, I think I’ve isolated the source of my confusion. I was confused by the mention of the Paris-Harrington theorem, which made me think “hard analysis” was meant to be first order Peano Arithmetic, and “soft analysis” was meant to be second or higher order arithmetic – terms which I have a vague familiarity with from reading about logic. That would be a precise dividing line, so I guess there are either first order (i.e. quantifying only over arbitrary numbers) “soft” statements or second or higher order (i.e. drawing upon arbitrary sets of numbers, or sets of sets of numbers, etc.) “hard” statements. And in fact for the epsilon-delta definition of limit and basic real analysis, you do need higher order, so I guess this is indeed the case.
I have seen a proof of Strengthened Finite Ramsey theorem (the one that Paris-Harrington refers to) before and it didn’t seem too complicated or seem to use difficult concepts. I shall have to check it out. Thanks for the reply!
10 February, 2008 at 1:21 pm
254A, Lecture 10: The Furstenberg correspondence principle « What’s new
[…] describe the simple but fundamental Furstenberg correspondence principle which connects the “soft analysis” subject of ergodic theory (in particular, recurrence theorems) with the “hard […]
17 March, 2008 at 8:26 am
liuxiaochuan
Dear professor:
Have you written a paper on this subject yet? I’m interested in this
subject and want to learn more.
A correction: before the last sentence of the proof from the finite convergence principle to the infinite convergence principle, I believe that is , not .
Liu
21 March, 2008 at 1:15 pm
Terence Tao
Dear Liu,
Thanks for the correction! In the print version of this article, which I am currently preparing as part of my book for this blog, I will add a few more references, but there is not really a systematic development of these types of conversions from soft analysis to hard analysis, other than some recent papers arising from the proof theory approach to things.
21 May, 2008 at 1:01 pm
285G, Lecture 14: Stationary points of Perelman entropy or reduced volume are gradient shrinking solitons « What’s new
[…] main idea here is to exploit what I have called the infinite convergence principle in a previous post: that every bounded monotone sequence converges. In the context of -solutions, we can apply this […]
18 June, 2008 at 11:23 pm
The strong law of large numbers « What’s new
[…] coming from the monotone convergence theorem is ineffective (one could effectivise this using the finite convergence principle, but this turns out to give very poor results […]
19 June, 2008 at 11:11 am
Heavy tails of distributions of words in literary texts « Things I’m working on and thinking about… Gary E. Davis
[…] of a positive integer variable is slowly varying if as , for some number . (This is a “soft analysis” definition in Terry Tao’s sense, I believe. In the applications we have in mind it […]
31 August, 2008 at 4:08 pm
The correspondence principle and finitary ergodic theory « What’s new
[…] tools from one type of mathematics can be used to prove results in the other. (See my previous post on soft analysis and hard analysis for further […]
5 November, 2008 at 5:15 pm
Concentration compactness and the profile decomposition « What’s new
[…] fact we have the slightly stronger statement (2)). In particular, the must go to zero as (the infinite convergence principle!). If the were selected in a “greedy” manner, this shows that the asymptotic norm […]
5 January, 2009 at 8:35 am
245B, notes 1: Signed measures and the Radon-Nikodym-Lebesgue theorem « What’s new
[…] non-trivial version of the above theory that can be applied to finite sets (cf. my blog post on the relationship between soft analysis and hard analysis). The cleanest formulation is to apply it to a sequence of (increasingly large) sets, rather than […]
28 February, 2009 at 12:13 am
Tricks Wiki: Give yourself an epsilon of room « What’s new
[…] Tao Today I’d like to discuss (in the Tricks Wiki format) a fundamental trick in “soft” analysis, sometimes known as the “limiting argument” or “epsilon […]
7 March, 2009 at 9:34 am
Infinite fields, finite fields, and the Ax-Grothendieck theorem « What’s new
[…] the correspondence Serre discusses is slightly different from the one I discuss for instance in here or here, although there seems to be a common theme of “compactness” (or of model […]
14 June, 2009 at 1:54 pm
Sequential compactness theorem @ unwanted capture
[…] the kind of correspondence principle that Tao has in mind, one which was discussed in detail in an earlier post from Tao’s blog. First we have what is essentially a quantitative version of the statement […]
22 July, 2009 at 11:57 am
Generalizing some colouring theorems to higher dimensions « Liu Xiaochuan’s Weblog
[…] Proof: first, we use the finitary form of van der Waerden’s theorem: for any given k and m there exists N such that whenever , any m-colouring of the set must contain a monochromatic arithmetic progression of length k. The equivalence of this form with the general form can be proved by a standard method; see Professor Tao’s post on soft analysis and hard analysis. […]
14 December, 2009 at 3:37 pm
Approximate bases, sunflowers, and nonstandard analysis « What’s new
[…] Looking at the statements and proofs of these two theorems it is clear that the two results are in some sense the “same” result, except that the latter has been made sufficiently quantitative that it is meaningful in such finitary settings as . In this note I will show how this equivalence can be made formal using the language of non-standard analysis. This is not a particularly deep (or new) observation, but it is perhaps the simplest example I know of that illustrates how nonstandard analysis can be used to transfer a quantifier-heavy finitary statement, such as Theorem 2, into a quantifier-light infinitary statement, such as Theorem 1, thus lessening the need to perform “epsilon management” duties, such as keeping track of unspecified growth functions such as . This type of transference is discussed at length in this previous blog post of mine. […]
30 January, 2010 at 7:07 pm
Ultralimit analysis, and quantitative algebraic geometry « What’s new
[…] Bezout's theorem, nonstandard analysis, ultrafilters, ultralimit analysis | by Terence Tao I have blogged a number of times in the past about the relationship between finitary (or “hard”, or […]
19 March, 2010 at 9:53 pm
A computational perspective on set theory « What’s new
[…] mathematics (involving only finitely many points or numbers, etc.); I have explored this theme in a number of previous blog posts. So, one may ask: what is the finitary analogue of statements such as […]
14 December, 2010 at 12:42 am
Ultrafilters in Ramsey theory « Annoying Precision
[…] theorem uses the infinite pigeonhole principle, which is stronger; this is part of the subject of a post by Terence Tao which is quite […]
15 October, 2011 at 10:58 am
254A, Notes 6: Ultraproducts as a bridge between hard analysis and soft analysis « What’s new
[…] poor (particularly if the proof of (i) relied extensively on infinitary tools, such as limits). See this blog post for some related […]
28 August, 2012 at 7:18 am
Hard analysis, soft analysis — The Endeavour
[…] I was trained as a bird, but I’ve become more of a frog over time. Of course these categories are not exclusive. Frogs sometimes act as birds and vice versa. The categories of hard and soft analysis are not exclusive either. See Terry Tao’s article on relating hard and soft analysis. […]
1 October, 2012 at 1:02 am
Functional Analysis – Introduction. Part II « Noncommutative Analysis
[…] of substance, which applies to concrete cases in analysis, that does not involve some kind of hard analysis, some epsilonaustics, some sweat. In our example, the hard analysis comes in the proof of the […]
25 October, 2012 at 10:10 am
Walsh’s ergodic theorem, metastability, and external Cauchy convergence « What’s new
[…] which is useful in situations in which one does not expect a uniform convergence rate; see this previous blog post for some discussion of metastability. When interpreted in a finitary setting, this concept requires […]
23 November, 2012 at 3:57 am
The three Moirai, continued « chorasimilarity
[…] Now that the construction is finished, let us let the Moirai do their job. Finally, I shall recall my real goal, which I have never forgotten. The real goal is to pass from an understanding of the power of this lambda calculus sector of graphic lambda calculus to the real deal, called “computing with space”: namely, to understand space from a computational perspective, not as a given receptacle, but as a small list of procedures along with some impossible-to-verify assertions (such as that we may rescale space indefinitely), see “emergent algebras”, which can always be eliminated a posteriori by a kind of finitization procedure. […]
3 December, 2012 at 5:35 pm
The spectral proof of the Szemeredi regularity lemma « What’s new
[…] with for all . Applying (3) and the pigeonhole principle (or the finite convergence principle, see this blog post), we can find such […]
10 May, 2013 at 10:05 pm
Bird’s-eye views of Structure and Randomness (Series) | Abstract Art
[…] Analogies between finitary and infinitary math (adapted from “Soft analysis, hard analysis and the finite convergence principle“) […]
17 May, 2013 at 11:50 am
andrescaicedo
Jennifer Chayes’s link is outdated. The current link should be http://research.microsoft.com/en-us/um/people/jchayes/
[Corrected, thanks – T.]
17 May, 2013 at 9:28 pm
Frank
Dear professor:
Sorry for my ignorance. In my opinion, the proof of this part (the infinite convergence principle implying the finite convergence principle) is imperfect. The compactness or completeness of the real line might just as well be avoided, since the theorem holds for the rationals too, i.e., a bounded monotone sequence of rational numbers is Cauchy as well. The compactness depends on that, I think.
20 May, 2013 at 7:04 am
Analogies between finitary and infinitary math | Abstract Art
[…] [1.3] Soft analysis, hard analysis, and the finite convergence principle […]
29 May, 2013 at 12:40 pm
Topological order for mixed states | Tobias J. Osborne's research notes
[…] The definition provided by Hastings is much closer in spirit with this second definition. (It’s worth noting that by working in the infinite lattice size limit we can avoid lots of s and s; we can make our soft analysis definition a hard analysis definition if we like in the standard way.) […]
30 August, 2013 at 12:20 pm
An encounter with Romanian theoretical computer science: FMI@150, Day I | Isarlâk
[…] you can convince yourself that the field is an interesting one, for example, by reading this discussion on Terry Tao’s blog. It would be interesting to see how many of the things in this field can be automated. The impression […]
18 September, 2015 at 4:46 pm
The logarithmically averaged Chowla and Elliott conjectures for two-point correlations; the Erdos discrepancy problem | What's new
[…] at some point one must reach a metastable region (cf. the finite convergence principle discussed in this previous blog post), within which very little mutual information can be shared between the sequence and the graph . […]
12 December, 2016 at 6:40 pm
Hilbert’s Curve, and the usefulness of infinite results in a finite world | Coding Craze
[…] Soft analysis, hard analysis, and the finite convergence principle c++ programs […]
14 December, 2016 at 8:28 am