
I recently learned about a curious operation on square matrices known as sweeping, which is used in numerical linear algebra (particularly in applications to statistics), as a useful and more robust variant of the usual Gaussian elimination operations seen in undergraduate linear algebra courses. Given an {n \times n} matrix {A := (a_{ij})_{1 \leq i,j \leq n}} (with, say, complex entries) and an index {1 \leq k \leq n}, with the entry {a_{kk}} non-zero, the sweep {\hbox{Sweep}_k[A] = (\hat a_{ij})_{1 \leq i,j \leq n}} of {A} at {k} is the matrix given by the formulae

\displaystyle  \hat a_{ij} := a_{ij} - \frac{a_{ik} a_{kj}}{a_{kk}}

\displaystyle  \hat a_{ik} := \frac{a_{ik}}{a_{kk}}

\displaystyle  \hat a_{kj} := \frac{a_{kj}}{a_{kk}}

\displaystyle  \hat a_{kk} := \frac{-1}{a_{kk}}

for all {i,j \in \{1,\dots,n\} \backslash \{k\}}. Thus for instance if {k=1}, and {A} is written in block form as

\displaystyle  A = \begin{pmatrix} a_{11} & X \\ Y & B \end{pmatrix} \ \ \ \ \ (1)

for some {1 \times n-1} row vector {X}, {n-1 \times 1} column vector {Y}, and {n-1 \times n-1} minor {B}, one has

\displaystyle  \hbox{Sweep}_1[A] = \begin{pmatrix} -1/a_{11} & X / a_{11} \\ Y/a_{11} & B - a_{11}^{-1} YX \end{pmatrix}. \ \ \ \ \ (2)

The inverse sweep operation {\hbox{Sweep}_k^{-1}[A] = (\check a_{ij})_{1 \leq i,j \leq n}} is given by a nearly identical set of formulae:

\displaystyle  \check a_{ij} := a_{ij} - \frac{a_{ik} a_{kj}}{a_{kk}}

\displaystyle  \check a_{ik} := -\frac{a_{ik}}{a_{kk}}

\displaystyle  \check a_{kj} := -\frac{a_{kj}}{a_{kk}}

\displaystyle  \check a_{kk} := \frac{-1}{a_{kk}}

for all {i,j \in \{1,\dots,n\} \backslash \{k\}}. One can check that these operations invert each other. Actually, each sweep turns out to have order {4}, so that {\hbox{Sweep}_k^{-1} = \hbox{Sweep}_k^3}: an inverse sweep performs the same operation as three forward sweeps. Sweeps also preserve the space of symmetric matrices (allowing one to cut down computational run time in that case by a factor of two), and behave well with respect to principal minors; a sweep of a principal minor is a principal minor of a sweep, after adjusting indices appropriately.
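
Since all of these formulae are completely explicit, they are easy to experiment with numerically. Here is a minimal Python/NumPy sketch (the routine names, and the use of NumPy itself, are of course just illustrative choices rather than any standard package; indices are 0-based, so {\hbox{Sweep}_1} corresponds to sweep(A, 0)) which checks the inverse relationship, the order four property, and the preservation of symmetry on a random matrix:

```python
import numpy as np

def sweep(A, k):
    """Sweep of a square matrix A at index k (0-based), assuming A[k, k] != 0."""
    A = np.asarray(A, dtype=float)
    akk = A[k, k]
    out = A - np.outer(A[:, k], A[k, :]) / akk   # entries a_ij - a_ik a_kj / a_kk
    out[:, k] = A[:, k] / akk                    # k-th column: a_ik / a_kk
    out[k, :] = A[k, :] / akk                    # k-th row:    a_kj / a_kk
    out[k, k] = -1.0 / akk                       # pivot:       -1 / a_kk
    return out

def inverse_sweep(A, k):
    """Inverse sweep: the same formulae with the k-th row and column negated."""
    A = np.asarray(A, dtype=float)
    akk = A[k, k]
    out = A - np.outer(A[:, k], A[k, :]) / akk
    out[:, k] = -A[:, k] / akk
    out[k, :] = -A[k, :] / akk
    out[k, k] = -1.0 / akk
    return out

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
print(np.allclose(inverse_sweep(sweep(A, 2), 2), A))   # inverse sweep undoes the sweep
B = A
for _ in range(4):
    B = sweep(B, 2)
print(np.allclose(B, A))                               # each sweep has order four
S = A + A.T                                            # sweeps preserve symmetry
print(np.allclose(sweep(S, 2), sweep(S, 2).T))
```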

Remarkably, the sweep operators all commute with each other: {\hbox{Sweep}_k \hbox{Sweep}_l = \hbox{Sweep}_l \hbox{Sweep}_k}. If {1 \leq k \leq n} and we perform the first {k} sweeps (in any order) to a matrix

\displaystyle  A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}

with {A_{11}} a {k \times k} minor, {A_{12}} a {k \times n-k} matrix, {A_{21}} a {n-k \times k} matrix, and {A_{22}} a {n-k \times n-k} matrix, one obtains the new matrix

\displaystyle  \hbox{Sweep}_1 \dots \hbox{Sweep}_k[A] = \begin{pmatrix} -A_{11}^{-1} & A_{11}^{-1} A_{12} \\ A_{21} A_{11}^{-1} & A_{22} - A_{21} A_{11}^{-1} A_{12} \end{pmatrix}.

Note the appearance of the Schur complement in the bottom right block. Thus, for instance, one can essentially invert a matrix {A} by performing all {n} sweeps:

\displaystyle  \hbox{Sweep}_1 \dots \hbox{Sweep}_n[A] = -A^{-1}.

If a matrix has the form

\displaystyle  A = \begin{pmatrix} B & X \\ Y & a \end{pmatrix}

for a {n-1 \times n-1} minor {B}, {n-1 \times 1} column vector {X}, {1 \times n-1} row vector {Y}, and scalar {a}, then performing the first {n-1} sweeps gives

\displaystyle  \hbox{Sweep}_1 \dots \hbox{Sweep}_{n-1}[A] = \begin{pmatrix} -B^{-1} & B^{-1} X \\ Y B^{-1} & a - Y B^{-1} X \end{pmatrix}

and all the components of this matrix are usable for various numerical linear algebra applications in statistics (e.g. in least squares regression). Given that sweeps behave well with inverses, it is perhaps not surprising that sweeps also behave well under determinants: the determinant of {A} can be factored as the product of the entry {a_{kk}} and the determinant of the {n-1 \times n-1} matrix formed from {\hbox{Sweep}_k[A]} by removing the {k^{th}} row and column. As a consequence, one can compute the determinant of {A} fairly efficiently (so long as the sweep operations don’t come close to dividing by zero) by sweeping the matrix at {k=1,\dots,n} in turn and multiplying together the {kk^{th}} entries of the matrix just before each sweep.
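
Continuing the illustrative sketch above (and reusing its sweep routine), one can check numerically that sweeping every index produces {-A^{-1}}, that the product of the pivots encountered along the way recovers the determinant, and that sweeps at distinct indices commute:

```python
import numpy as np
# reuses the sweep() routine from the previous sketch

def sweep_all(A):
    """Sweep at every index in turn; return the fully swept matrix and det(A)."""
    A = np.array(A, dtype=float)
    det = 1.0
    for k in range(A.shape[0]):
        det *= A[k, k]          # the kk-th entry just before the k-th sweep
        A = sweep(A, k)
    return A, det

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
S, det = sweep_all(A)
print(np.allclose(S, -np.linalg.inv(A)))    # all n sweeps give -A^{-1}
print(np.isclose(det, np.linalg.det(A)))    # product of pivots gives det(A)
print(np.allclose(sweep(sweep(A, 1), 3), sweep(sweep(A, 3), 1)))   # sweeps commute
```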

It turns out that there is a simple geometric explanation for these seemingly magical properties of the sweep operation. Any {n \times n} matrix {A} determines a graph {\hbox{Graph}[A] := \{ (X, AX): X \in {\bf R}^n \}} (where we think of {{\bf R}^n} as the space of column vectors). This graph is an {n}-dimensional subspace of {{\bf R}^n \times {\bf R}^n}. Conversely, most {n}-dimensional subspaces of {{\bf R}^n \times {\bf R}^n} arise as graphs; the exceptions are those subspaces that fail the vertical line test, and they form a set of positive codimension.

We use {e_1,\dots,e_n,f_1,\dots,f_n} to denote the standard basis of {{\bf R}^n \times {\bf R}^n}, with {e_1,\dots,e_n} the standard basis for the first factor of {{\bf R}^n} and {f_1,\dots,f_n} the standard basis for the second factor. The operation of sweeping the {k^{th}} entry then corresponds to a ninety degree rotation {\hbox{Rot}_k: {\bf R}^n \times {\bf R}^n \rightarrow {\bf R}^n \times {\bf R}^n} in the {e_k,f_k} plane, that sends {f_k} to {e_k} (and {e_k} to {-f_k}), keeping all other basis vectors fixed: thus we have

\displaystyle  \hbox{Graph}[ \hbox{Sweep}_k[A] ] = \hbox{Rot}_k \hbox{Graph}[A]

for generic {n \times n} {A} (more precisely, those {A} with non-vanishing entry {a_{kk}}). For instance, if {k=1} and {A} is of the form (1), then {\hbox{Graph}[A]} is the set of tuples {(r,R,s,S) \in {\bf R} \times {\bf R}^{n-1} \times {\bf R} \times {\bf R}^{n-1}} obeying the equations

\displaystyle  a_{11} r + X R = s

\displaystyle  Y r + B R = S.

The image of {(r,R,s,S)} under {\hbox{Rot}_1} is {(s, R, -r, S)}. Since we can write the above system of equations (for {a_{11} \neq 0}) as

\displaystyle  \frac{-1}{a_{11}} s + \frac{X}{a_{11}} R = -r

\displaystyle  \frac{Y}{a_{11}} s + (B - a_{11}^{-1} YX) R = S

we see from (2) that {\hbox{Rot}_1 \hbox{Graph}[A]} is the graph of {\hbox{Sweep}_1[A]}. Thus the sweep operation is a multidimensional generalisation of the high school geometry fact that the line {y = mx} in the plane becomes {y = \frac{-1}{m} x} after applying a ninety degree rotation.
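
This dictionary between sweeping and rotation is also easy to test numerically: represent {\hbox{Graph}[A]} by the column span of the {2n \times n} matrix whose top block is the identity and whose bottom block is {A}, apply {\hbox{Rot}_1}, and read off the matrix whose graph the rotated subspace is. A short sketch (again reusing the illustrative sweep routine from above):

```python
import numpy as np
# reuses the sweep() routine from the earlier sketch

n = 4
rng = np.random.default_rng(2)
A = rng.standard_normal((n, n))

# Graph[A] = column span of the 2n x n matrix [I; A]
G = np.vstack([np.eye(n), A])

# Rot_1 in the e_1, f_1 plane: sends f_1 to e_1 and e_1 to -f_1, fixing the rest.
R = np.eye(2 * n)
R[0, 0] = R[n, n] = 0.0
R[0, n] = 1.0      # the f_1 component becomes the new e_1 component
R[n, 0] = -1.0     # the e_1 component becomes minus the new f_1 component
RG = R @ G

# If the rotated subspace is still a graph, it is the graph of V U^{-1}, where
# U and V are the top and bottom n x n blocks of any matrix whose columns span it.
U, V = RG[:n, :], RG[n:, :]
print(np.allclose(V @ np.linalg.inv(U), sweep(A, 0)))   # rotated graph = graph of the sweep
```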

It is then an instructive exercise to use this geometric interpretation of the sweep operator to recover all the remarkable properties about these operations listed above. It is also useful to compare the geometric interpretation of sweeping as rotation of the graph to that of Gaussian elimination, which instead shears and reflects the graph by various elementary transformations (this is what is going on geometrically when one performs Gaussian elimination on an augmented matrix). Rotations are less distorting than shears, so one can see geometrically why sweeping can produce fewer numerical artefacts than Gaussian elimination.

In addition to the Fields medallists mentioned in the previous post, the IMU also awarded the Nevanlinna prize to Subhash Khot, the Gauss prize to Stan Osher (my colleague here at UCLA!), and the Chern medal to Phillip Griffiths. Like I did in 2010, I’ll try to briefly discuss one result of each of the prize winners, though the fields of mathematics here are even further from my expertise than those discussed in the previous post (and all the caveats from that post apply here also).

Subhash Khot is best known for his Unique Games Conjecture, a problem in complexity theory that is perhaps second in importance only to the {P \neq NP} problem for the purposes of demarcating the mysterious line between “easy” and “hard” problems (if one follows standard practice and uses “polynomial time” as the definition of “easy”). The {P \neq NP} problem can be viewed as an assertion that it is difficult to find exact solutions to certain standard theoretical computer science problems (such as {k}-SAT); thanks to the NP-completeness phenomenon, it turns out that the precise problem posed here is not of critical importance, and {k}-SAT may be substituted with one of the many other problems known to be NP-complete. The unique games conjecture is similarly an assertion about the difficulty of finding even approximate solutions to certain standard problems, in particular “unique games” problems in which one needs to colour the vertices of a graph in such a way that the colour of one vertex of an edge is determined uniquely (via a specified matching) by the colour of the other vertex. This is an easy problem to solve if one insists on exact solutions (in which 100% of the edges have a colouring compatible with the specified matching), but becomes extremely difficult if one permits approximate solutions, with no exact solution available. In analogy with the NP-completeness phenomenon, the threshold for approximate satisfiability of many other problems (such as the MAX-CUT problem) is closely connected with the truth of the unique games conjecture; remarkably, the truth of the unique games conjecture would imply asymptotically sharp thresholds for many of these problems. This has implications for many theoretical computer science constructions which rely on hardness of approximation, such as probabilistically checkable proofs. For a more detailed survey of the unique games conjecture and its implications, see this Bulletin article of Trevisan.

My colleague Stan Osher has worked in many areas of applied mathematics, ranging from image processing to modeling fluids for major animation studios such as Pixar or Dreamworks, but today I would like to talk about one of his contributions that is close to an area of my own expertise, namely compressed sensing. One of the basic reconstruction problems in compressed sensing is the basis pursuit problem of finding the vector {x \in {\bf R}^n} in an affine space {\{ x \in {\bf R}^n: Ax = b \}} (where {b \in {\bf R}^m} and {A \in {\bf R}^{m\times n}} are given, and {m} is typically somewhat smaller than {n}) which minimises the {\ell^1}-norm {\|x\|_{\ell^1} := \sum_{i=1}^n |x_i|} of the vector {x}. This is a convex optimisation problem, and thus solvable in principle (it is a polynomial time problem, and thus “easy” in the above theoretical computer science sense). However, once {n} and {m} get moderately large (e.g. of the order of {10^6}), standard linear optimisation routines begin to become computationally expensive; also, it is difficult for off-the-shelf methods to exploit any additional structure (e.g. sparsity) in the measurement matrix {A}. Much of the problem comes from the fact that the functional {x \mapsto \|x\|_1} is only barely convex. One way to speed up the optimisation problem is to relax it by replacing the constraint {Ax=b} with a convex penalty term {\frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2}, thus one is now trying to minimise the unconstrained functional

\displaystyle \|x\|_1 + \frac{1}{2\epsilon} \|Ax-b\|_{\ell^2}^2.

This functional is more convex, and is over a computationally simpler domain {{\bf R}^n} than the affine space {\{x \in {\bf R}^n: Ax=b\}}, so is easier (though still not entirely trivial) to optimise over. However, the minimiser {x^\epsilon} to this problem need not match the minimiser {x^0} to the original problem, particularly if the (sub-)gradient {\partial \|x\|_1} of the original functional {\|x\|_1} is large at {x^0}, and if {\epsilon} is not set to be small. (And setting {\epsilon} too small will cause other difficulties with numerically solving the optimisation problem, due to the need to divide by very small denominators.) However, if one modifies the objective function by an additional linear term

\displaystyle \|x\|_1 - \langle p, x \rangle + \frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2

then some simple convexity considerations reveal that the minimiser to this new problem will match the minimiser {x^0} to the original problem, provided that {p} is (or more precisely, lies in) the (sub-)gradient {\partial \|x\|_1} of {\|x\|_1} at {x^0} – even if {\epsilon} is not small. But, one does not know in advance what the correct value of {p} should be, because one does not know what the minimiser {x^0} is.

With Yin, Goldfarb and Darbon, Osher introduced a Bregman iteration method in which one solves for {x} and {p} simultaneously; given the current iterates {x^k} and {p^k}, one first updates {x^k} to the minimiser {x^{k+1} \in {\bf R}^n} of the convex functional

\displaystyle \|x\|_1 - \langle p^k, x \rangle + \frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2 \ \ \ \ \ (1)

and then updates {p^{k+1}} to the natural value of the subgradient {\partial \|x\|_1} at {x^{k+1}}, namely

\displaystyle p^{k+1} := p^k - \nabla \frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2|_{x=x^{k+1}} = p^k - \frac{1}{\epsilon} A^T (Ax^{k+1} - b)

(note upon taking the first variation of (1) that {p^{k+1}} is indeed in the subgradient). This procedure converges remarkably quickly (both in theory and in practice) to the true minimiser {x^0} even for non-small values of {\epsilon}, and also has some ability to be parallelised, and has led to actual performance improvements of an order of magnitude or more in certain compressed sensing problems (such as reconstructing an MRI image).
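
To make this concrete, here is a small NumPy sketch of the iteration just described, in which the inner minimisation (1) is performed approximately by proximal gradient (soft thresholding) steps; this is only a toy illustration with assumed parameter choices, and is not intended to reproduce the actual algorithms or code of Osher and his coauthors.

```python
import numpy as np

def soft_threshold(z, tau):
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def bregman_basis_pursuit(A, b, eps=1.0, outer_iters=50, inner_iters=200):
    """Toy Bregman iteration for min ||x||_1 subject to Ax = b.

    Each outer step approximately minimises ||x||_1 - <p, x> + (1/(2 eps))||Ax - b||^2
    by proximal-gradient (soft-thresholding) steps, then updates p by the subgradient
    formula p <- p - (1/eps) A^T (A x - b)."""
    m, n = A.shape
    x, p = np.zeros(n), np.zeros(n)
    step = eps / np.linalg.norm(A, 2) ** 2       # step size for the smooth part
    for _ in range(outer_iters):
        for _ in range(inner_iters):
            grad = -p + A.T @ (A @ x - b) / eps
            x = soft_threshold(x - step * grad, step)
        p = p - A.T @ (A @ x - b) / eps
    return x

# A toy compressed sensing instance: recover a sparse vector from few measurements.
rng = np.random.default_rng(0)
n, m, s = 200, 80, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
b = A @ x0
x = bregman_basis_pursuit(A, b)
print("recovery error:", np.linalg.norm(x - x0))   # should be small if recovery succeeds
```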

Phillip Griffiths has made many contributions to complex, algebraic and differential geometry, and I am not qualified to describe most of these; my primary exposure to his work is through his text on algebraic geometry with Harris, but, excellent though that text is, it is not really representative of his research. But I thought I would mention one cute result of his related to the famous Nash embedding theorem. Suppose that one has a smooth {n}-dimensional Riemannian manifold that one wants to embed locally into a Euclidean space {{\bf R}^m}. The Nash embedding theorem guarantees that one can do this if {m} is large enough depending on {n}, but what is the minimal value of {m} one can get away with? Many years ago, my colleague Robert Greene showed that {m = \frac{n(n+1)}{2} + n} sufficed (a simplified proof was subsequently given by Gunther). However, this is not believed to be sharp; if one replaces “smooth” with “real analytic” then a standard Cauchy-Kovalevski argument shows that {m = \frac{n(n+1)}{2}} is possible, and no better. So this suggests that {m = \frac{n(n+1)}{2}} is the threshold for the smooth problem also, but this remains open in general. The case {n=1} is trivial, and the {n=2} case is not too difficult (if the curvature is non-zero) as the codimension {m-n} is one in this case, and the problem reduces to that of solving a Monge-Ampere equation. With Bryant and Yang, Griffiths settled the {n=3} case, under a non-degeneracy condition on the Einstein tensor. This is quite a serious paper – over 100 pages combining differential geometry, PDE methods (e.g. Nash-Moser iteration), and even some harmonic analysis (e.g. they rely at one key juncture on an extension theorem of my advisor, Elias Stein). The main difficulty is that the relevant PDE degenerates along a certain characteristic submanifold of the cotangent bundle, which then requires an extremely delicate analysis to handle.

This is the ninth thread for the Polymath8b project to obtain new bounds for the quantity

\displaystyle  H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),

either for small values of {m} (in particular {m=1,2}) or asymptotically as {m \rightarrow \infty}. The previous thread may be found here. The currently best known bounds on {H_m} can be found at the wiki page.

The focus is now on bounding {H_1} unconditionally (in particular, without resorting to the Elliott-Halberstam conjecture or its generalisations). We can bound {H_1 \leq H(k)} whenever one can find a symmetric square-integrable function {F} supported on the simplex {{\cal R}_k := \{ (t_1,\dots,t_k) \in [0,+\infty)^k: t_1+\dots+t_k \leq 1 \}} such that

\displaystyle  k \int_{{\cal R}_{k-1}} (\int_0^\infty F(t_1,\dots,t_k)\ dt_k)^2\ dt_1 \dots dt_{k-1} \ \ \ \ \ (1)

\displaystyle > 4 \int_{{\cal R}_{k}} F(t_1,\dots,t_k)^2\ dt_1 \dots dt_{k-1} dt_k.

Our strategy for establishing this has been to restrict {F} to be a linear combination of symmetrised monomials {[t_1^{a_1} \dots t_k^{a_k}]_{sym}} (restricted of course to {{\cal R}_k}), where the degree {a_1+\dots+a_k} is small; actually, it seems convenient to work with the slightly different basis {(1-t_1-\dots-t_k)^i [t_1^{a_1} \dots t_k^{a_k}]_{sym}} where the {a_i} are restricted to be even. The criterion (1) then becomes a large quadratic program with explicit but complicated rational coefficients. This approach has lowered {k} down to {54}, which led to the bound {H_1 \leq 270}.
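
As a toy illustration of how (1) becomes a finite-dimensional problem once {F} is restricted to a finite basis, one can restrict all the way down to functions of the form {F(t_1,\dots,t_k) = P(1-t_1-\dots-t_k)} for a low degree polynomial {P} (i.e. keeping only the basis elements with {a_1=\dots=a_k=0}, which is far cruder than what the project actually uses). The Dirichlet-type integral {\int_{{\cal R}_d} (1-t_1-\dots-t_d)^b\ dt_1 \dots dt_d = \frac{b!}{(b+d)!}} then expresses both sides of (1) as explicit quadratic forms in the coefficients of {P}, and maximising their ratio is a small generalised eigenvalue problem. A hedged Python sketch (floating point only; the project's actual computations require exact rational arithmetic on a much larger basis):

```python
import numpy as np
from math import factorial
from scipy.linalg import eigh

def best_ratio(k, deg):
    """Largest value of (LHS of (1)) / (RHS of (1) without the factor 4) over all
    F(t) = P(1 - t_1 - ... - t_k) with deg(P) <= deg; criterion (1) holds for some
    such F exactly when this value exceeds 4."""
    idx = range(deg + 1)
    # RHS form: int_{R_k} P(1-s)^2 dt, where s = t_1 + ... + t_k.
    I = np.array([[factorial(i + j) / factorial(i + j + k) for j in idx] for i in idx])
    # LHS form: k * int_{R_{k-1}} Q(1 - t_1 - ... - t_{k-1})^2 dt, where
    # Q(v) = int_0^v P(u) du comes from the inner integral in t_k.
    J = np.array([[k * factorial(i + j + 2)
                   / ((i + 1) * (j + 1) * factorial(i + j + k + 1))
                   for j in idx] for i in idx])
    d = 1.0 / np.sqrt(np.diag(I))               # rescale to tame the factorials
    I, J = I * np.outer(d, d), J * np.outer(d, d)
    return eigh(J, I, eigvals_only=True)[-1]    # largest generalised eigenvalue

for k in (5, 10, 20, 54):
    print(k, best_ratio(k, deg=6))
```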

Actually, we know that the more general criterion

\displaystyle  k \int_{(1-\epsilon) \cdot {\cal R}_{k-1}} (\int_0^\infty F(t_1,\dots,t_k)\ dt_k)^2\ dt_1 \dots dt_{k-1} \ \ \ \ \ (2)

\displaystyle  > 4 \int F(t_1,\dots,t_k)^2\ dt_1 \dots dt_{k-1} dt_k

will suffice, whenever {0 \leq \epsilon < 1} and {F} is supported now on {2 \cdot {\cal R}_k} and obeys the vanishing marginal condition {\int_0^\infty F(t_1,\dots,t_k)\ dt_k = 0} whenever {t_1+\dots+t_{k-1} > 1+\epsilon}. The latter is in particular obeyed when {F} is supported on {(1+\epsilon) \cdot {\cal R}_k}. A modification of the preceding strategy has lowered {k} slightly to {53}, giving the bound {H_1 \leq 264} which is currently our best record.

However, the quadratic programs here have become extremely large and slow to run, and more efficient algorithms (or possibly more computer power) may be required to advance further.

This is the third thread for the Polymath8b project to obtain new bounds for the quantity

\displaystyle H_m := \liminf_{n \rightarrow\infty} (p_{n+m} - p_n),

either for small values of {m} (in particular {m=1,2}) or asymptotically as {m \rightarrow \infty}. The previous thread may be found here. The currently best known bounds on {H_m} are:

  • (Maynard) Assuming the Elliott-Halberstam conjecture, {H_1 \leq 12}.
  • (Polymath8b, tentative) {H_1 \leq 330}. Assuming Elliott-Halberstam, {H_2 \leq 330}.
  • (Polymath8b, tentative) {H_2 \leq 484{,}126}. Assuming Elliott-Halberstam, {H_4 \leq 493{,}408}.
  • (Polymath8b) {H_m \leq \exp( 3.817 m )} for sufficiently large {m}. Assuming Elliott-Halberstam, {H_m \ll e^{2m} m \log m} for sufficiently large {m}.

Much of the current focus of the Polymath8b project is on the quantity

\displaystyle M_k = M_k({\cal R}_k) := \sup_F \frac{\sum_{m=1}^k J_k^{(m)}(F)}{I_k(F)}

where {F} ranges over square-integrable functions on the simplex

\displaystyle {\cal R}_k := \{ (t_1,\ldots,t_k) \in [0,+\infty)^k: t_1+\ldots+t_k \leq 1 \}

with {I_k, J_k^{(m)}} being the quadratic forms

\displaystyle I_k(F) := \int_{{\cal R}_k} F(t_1,\ldots,t_k)^2\ dt_1 \ldots dt_k

and

\displaystyle J_k^{(m)}(F) := \int_{{\cal R}_{k-1}} (\int_0^{1-\sum_{i \neq m} t_i} F(t_1,\ldots,t_k)\ dt_m)^2

\displaystyle dt_1 \ldots dt_{m-1} dt_{m+1} \ldots dt_k.

It was shown by Maynard that one has {H_m \leq H(k)} whenever {M_k > 4m}, where {H(k)} is the narrowest diameter of an admissible {k}-tuple. As discussed in the previous post, we have slight improvements to this implication, but they are currently difficult to implement, due to the need to perform high-dimensional integration. The quantity {M_k} does seem however to be close to the theoretical limit of what the Selberg sieve method can achieve for implications of this type (at the Bombieri-Vinogradov level of distribution, at least); it seems of interest to explore more general sieves, although we have not yet made much progress in this direction.
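
For readers who wish to experiment, here is a crude Monte Carlo sketch (purely illustrative, and nowhere near accurate enough for the actual optimisation) which estimates the ratio {\sum_{m=1}^k J_k^{(m)}(F)/I_k(F)} for a given symmetric {F}, using the symmetry to compute only {J_k^{(k)}}; for the trivial choice {F \equiv 1} the exact value of the ratio is {2k/(k+1)}, which provides a sanity check.

```python
import numpy as np
from math import factorial

def uniform_simplex(rng, n, d):
    """n uniform samples from R_d = {t_i >= 0, t_1 + ... + t_d <= 1} via spacings."""
    u = np.sort(rng.random((n, d)), axis=1)
    return np.diff(u, axis=1, prepend=0.0)

def mk_ratio(F, k, samples=20_000, quad_pts=32, seed=0):
    """Monte Carlo estimate of sum_m J_k^{(m)}(F) / I_k(F) for a symmetric F,
    where F maps an (N, k) array of points of R_k to N values.  Crude sketch only."""
    rng = np.random.default_rng(seed)
    # I_k(F) = vol(R_k) * average of F^2 over uniform samples of R_k.
    t = uniform_simplex(rng, samples, k)
    I = (F(t) ** 2).mean() / factorial(k)
    # J_k^{(k)}(F): outer integral over R_{k-1}; the inner integral in t_k over
    # [0, 1 - t_1 - ... - t_{k-1}] is done by the midpoint rule.
    u = uniform_simplex(rng, samples, k - 1)
    rem = 1.0 - u.sum(axis=1)
    grid = (np.arange(quad_pts) + 0.5) / quad_pts
    pts = np.concatenate([np.repeat(u[:, None, :], quad_pts, axis=1),
                          (rem[:, None] * grid[None, :])[:, :, None]], axis=2)
    inner = F(pts.reshape(-1, k)).reshape(samples, quad_pts).mean(axis=1) * rem
    J = (inner ** 2).mean() / factorial(k - 1)
    return k * J / I

k = 5
print(mk_ratio(lambda t: np.ones(len(t)), k))               # exact answer: 2k/(k+1) = 5/3
print(mk_ratio(lambda t: (1.0 - t.sum(axis=1)) ** 2, k))
```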

The best asymptotic bounds for {M_k} we have are

\displaystyle \log k - \log\log\log k + O(1) \leq M_k \leq \frac{k}{k-1} \log k \ \ \ \ \ (1)

 

which we prove below the fold. The upper bound holds for all {k > 1}; the lower bound is only valid for sufficiently large {k}, and gives the upper bound {H_m \ll e^{2m} m \log m} on Elliott-Halberstam.

For small {k}, the upper bound is quite competitive, for instance it provides the upper bound in the best values

\displaystyle 1.845 \leq M_4 \leq 1.848

and

\displaystyle 2.001162 \leq M_5 \leq 2.011797

we have for {M_4} and {M_5}. The situation is a little less clear for medium values of {k}, for instance we have

\displaystyle 3.95608 \leq M_{59} \leq 4.148

and so it is not yet clear whether {M_{59} > 4} (which would imply {H_1 \leq 300}). See this wiki page for some further upper and lower bounds on {M_k}.

The best lower bounds are not obtained through the asymptotic analysis, but rather through quadratic programming (extending the original method of Maynard). This has given significant numerical improvements to our best bounds (in particular lowering the {H_1} bound from {600} to {330}), but we have not yet been able to combine this method with the other potential improvements (enlarging the simplex, using MPZ distributional estimates, and exploiting upper bounds on two-point correlations) due to the computational difficulty involved.


In my previous post, I briefly discussed the work of the four Fields medalists of 2010 (Lindenstrauss, Ngo, Smirnov, and Villani). In this post I will discuss the work of Dan Spielman (winner of the Nevanlinna prize), Yves Meyer (winner of the Gauss prize), and Louis Nirenberg (winner of the Chern medal). Again by chance, the work of all three of the recipients overlaps to some extent with my own areas of expertise, so I will be able to discuss a sample contribution for each of them. Again, my choice of contribution is somewhat idiosyncratic and is not intended to represent the “best” work of each of the awardees.


The summer continues to allow some clearing of the backlog of projects accumulated during the academic year: Helge Holden, Kenneth Karlsen, Nils Risebro, and I have uploaded to the arXiv the paper “Operator splitting for the KdV equation“, submitted to Math. Comp.  This paper is concerned with accurate numerical schemes for solving initial value problems for the Korteweg-de Vries equation

u_t + u_{xxx} = u u_x; \quad u(0,x) = u_0(x), (1)

though the analysis here would be valid for a wide range of other semilinear dispersive models as well.  In principle, these equations, which are completely integrable, can be solved exactly by the inverse scattering method, but fast and accurate implementations of this method are still not as satisfactory as one would like.  On the other hand, the linear Korteweg-de Vries equation

u_t + u_{xxx} = 0; \quad u(0,x) = u_0(x) (2)

can be solved exactly (with accurate and fast numerics) via the (fast) Fourier transform, while the (inviscid) Burgers equation

u_t = u u_x; \quad u(0,x) = u_0(x) (3)

can also be solved exactly (and quickly) by the method of characteristics.  Since the KdV equation is in some sense a combination of the equations (2) and (3), it is then reasonable to hope that some combination of the solution schemes for (2) and (3) can be used to solve (1), at least in some approximate sense.
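
For instance, the exact solver for the linear equation (2) is just a Fourier multiplier, \hat u(t,\xi) = e^{it \xi^3} \hat u_0(\xi).  Here is a minimal NumPy sketch for periodic data on [0, 2\pi) (the periodic setting is chosen purely for convenience of illustration; the discussion above is on the whole real line):

```python
import numpy as np

def airy_step(u, dt, L=2 * np.pi):
    """Exact solution of u_t + u_xxx = 0 over time dt, for periodic data u on [0, L)."""
    n = len(u)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)          # angular wavenumbers
    return np.fft.ifft(np.exp(1j * k ** 3 * dt) * np.fft.fft(u))

# quick check against the exact single-mode solution u(t, x) = sin(m (x + m^2 t))
n, m, dt = 256, 3, 0.1
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u1 = airy_step(np.sin(m * x), dt).real
print(np.allclose(u1, np.sin(m * (x + m ** 2 * dt))))
```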

One way to do this is by the method of operator splitting.  Observe from the formal approximation e^{A \Delta t} \approx 1 + A \Delta t + O( \Delta t^2) (where \Delta t should be thought of as small, and A is some matrix or linear operator), that one has

e^{(A+B) \Delta t} u_0 = e^{A \Delta t} e^{B \Delta t} u_0 + O( \Delta t^2 ), (4)

[we do not assume A and B to commute here] and thus we formally have

e^{n (A+B) \Delta t} u_0 = (e^{A \Delta t} e^{B \Delta t})^n u_0 + O( \Delta t ) (5)

if n = T/\Delta t for some fixed time T (thus n = O(1/\Delta t)).  As a consequence, if one wants to solve the linear ODE

u_t = (A+B) u; \quad u(0) = u_0 (1′)

for time T = n \Delta t, one can achieve an approximate solution (accurate to order \Delta t) by alternating n times between evolving the ODE

u_t = A u (2′)

for time \Delta t, and evolving the ODE

u_t = B u (3′)

for time \Delta t, starting with the initial data u_0.
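
Here is a small SciPy sketch of (5) for a pair of non-commuting random matrices A, B: the error of the split evolution at a fixed time T decays roughly like the first power of \Delta t, as predicted.  (This is of course only a toy linear example; the content of the paper is in justifying such schemes for the nonlinear problem (1).)

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
u0 = rng.standard_normal(4)
T = 1.0
exact = expm((A + B) * T) @ u0

def lie_splitting(n):
    """Alternate the exact flows of u_t = Au and u_t = Bu, n times with dt = T/n."""
    dt = T / n
    step = expm(A * dt) @ expm(B * dt)
    u = u0.copy()
    for _ in range(n):
        u = step @ u
    return u

for n in (10, 20, 40, 80):
    print(n, np.linalg.norm(lie_splitting(n) - exact))   # error decreases roughly like 1/n
```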

It turns out that this scheme can be formalised, and furthermore generalised to nonlinear settings such as those for the KdV equation (1).  More precisely, we show that if u_0 \in H^s({\Bbb R}) for some s \geq 5, then one can solve (1) to accuracy O(\Delta t) in H^s norm for any fixed time T = n \Delta t by alternating between evolving (2) and (3) for times \Delta t (this scheme is known as Godunov splitting).

Actually, one can obtain faster convergence by modifying the scheme, at the cost of requiring higher regularity on the data; the situation is similar to that of numerical integration (or quadrature), in which the midpoint rule or Simpson’s rule provide more accuracy than a simple Riemann sum if the integrand is smooth.  For instance, one has the variant

e^{n (A+B) \Delta t} = (e^{A \Delta t/2} e^{B \Delta t/2} e^{B \Delta t/2} e^{A \Delta t/2})^n + O( \Delta t^2 ) (6)

of (5), which can be seen by expansion to second order in \Delta t (or by playing around with the Baker-Campbell-Hausdorff formula).  For KdV, we can rigorously show that the analogous scheme (known as Strang splitting) involving the indicated combination of evolutions of (2) and (3) will also converge to accuracy \Delta t^2 in H^s norm, provided that u_0 \in H^s({\Bbb R}) and s \geq 17.
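
Continuing the toy linear sketch from above (same A, B, u_0 and T), replacing the Lie step by the symmetric step from (6) (the two middle half-steps combining into a single e^{B \Delta t}) upgrades the observed convergence to second order:

```python
# continues the previous sketch: reuses expm, A, B, u0, T and exact from there
def strang_splitting(n):
    """Symmetric (Strang) splitting: e^{A dt/2} e^{B dt} e^{A dt/2} per time step."""
    dt = T / n
    step = expm(A * dt / 2) @ expm(B * dt) @ expm(A * dt / 2)
    u = u0.copy()
    for _ in range(n):
        u = step @ u
    return u

for n in (10, 20, 40, 80):
    print(n, np.linalg.norm(strang_splitting(n) - exact))   # error decreases roughly like 1/n^2
```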


Emmanuel Candés and I have just uploaded to the arXiv our paper “The power of convex relaxation: near-optimal matrix completion“, submitted to IEEE Inf. Theory.  In this paper we study the matrix completion problem, which one can view as a sort of “non-commutative” analogue of the sparse recovery problem studied in the field of compressed sensing, although there are also some other significant differences between the two problems.   The sparse recovery problem seeks to recover a sparse vector x \in {\Bbb R}^n from some linear measurements Ax = b \in {\Bbb R}^m, where A is a known m \times n matrix.  For general x, classical linear algebra tells us that if m < n, then the problem here is underdetermined and has multiple solutions; but under the additional assumption that x is sparse (most of the entries are zero), it turns out (under various hypotheses on the measurement matrix A, and in particular if A contains a sufficient amount of “randomness” or “incoherence”) that exact recovery becomes possible in the underdetermined case.  Furthermore, recovery is not only theoretically possible, but is also computationally practical in many cases; in particular, under some assumptions on A, one can recover x by minimising the convex norm \| x \|_{\ell^1} over all solutions to Ax=b.

Now we turn to the matrix completion problem.  Instead of an unknown vector x \in {\Bbb R}^n, we now have an unknown matrix M = (m_{ij})_{i \in [n_1], j \in [n_2]} \in {\Bbb R}^{n_1 \times n_2} (we use the shorthand [n] := \{1,\ldots,n\} here). We will take a specific type of underdetermined linear measurement of M, namely we pick a random subset \Omega \subset [n_1] \times [n_2] of the matrix array [n_1] \times [n_2] of some cardinality 1 \leq m \leq n_1 n_2, and form the random sample P_\Omega(M) := (m_{ij})_{(i,j) \in \Omega} \in {\Bbb R}^{\Omega} of M.

Of course, with no further information on M, it is impossible to complete the matrix M from the partial information P_\Omega(M) – we only have m pieces of information and need n_1 n_2.  But suppose we also know that M is low-rank, e.g. has rank less than r; this is an analogue of sparsity, but for matrices rather than vectors.  Then, in principle, we have reduced the number of degrees of freedom for M from n_1 n_2 to something more like O( r \max(n_1,n_2) ), and so (in analogy with compressed sensing) one may now hope to perform matrix completion with a much smaller fraction of samples, and in particular with m close to r \max(n_1,n_2).

This type of problem comes up in several real-world applications, most famously in the Netflix prize.  The Netflix prize problem is to be able to predict a very large ratings matrix M, whose rows are the customers, whose columns are the movies, and the entries are the rating that each customer would hypothetically assign to each movie.  Of course, not every customer has rented every movie from Netflix, and so only a small fraction P_\Omega(M) of this matrix is actually known.  However, if one makes the assumption that most customers’ rating preference is determined by only a small number of characteristics of the movie (e.g. genre, lead actor/actresses, director, year, etc.), then the matrix should be (approximately) low rank, and so the above type of analysis should be useful (though of course it is not going to be the only tool of use in this messy, real-world problem).

Actually, one expects to need to oversample the matrix by a logarithm or two in order to have a good chance of exact recovery, if one is sampling randomly.  This can be seen even in the rank one case r=1, in which M=uv^* is the product of a column vector and a row vector; let’s consider square matrices n_1=n_2=n for simplicity.  Observe that if the sampled coordinates \Omega completely miss one of the rows of the matrix, then the corresponding element of u has gone completely unmeasured, and one cannot hope to complete this row of the matrix.   Thus one needs to sample every row (and also every column) of the n \times n matrix.  The solution to the coupon collector’s problem then tells us that one needs about O(n \log n) samples to achieve this goal.  In fact, the theory of Erdős-Rényi random graphs tells us that the bipartite graph induced by \Omega becomes almost surely connected beyond this threshold, which turns out to be exactly what is needed to perform matrix completion for rank 1 matrices.
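
A quick simulation of the sampling obstruction just described (with purely illustrative constants): sampling m = C n \log n entries of an n \times n array uniformly at random, one can record how often every row and every column receives at least one sample.

```python
import numpy as np

def covers_rows_and_cols(n, m, rng):
    """Sample m distinct entries of an n x n array uniformly at random and check
    whether every row and every column contains at least one sampled entry."""
    idx = rng.choice(n * n, size=m, replace=False)
    return len(np.unique(idx // n)) == n and len(np.unique(idx % n)) == n

n, trials = 200, 300
rng = np.random.default_rng(0)
for C in (0.8, 1.0, 1.5, 2.0):
    m = int(C * n * np.log(n))
    hits = sum(covers_rows_and_cols(n, m, rng) for _ in range(trials))
    print(f"m = {C} n log n: all rows and columns hit in {hits}/{trials} trials")
```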

On the other hand, one cannot hope to complete the matrix if some of the singular vectors of the matrix are extremely sparse.  For instance, in the Netflix problem, a singularly idiosyncratic customer (or dually, a singularly unclassifiable movie) may give rise to a row or column of M that has no relation to the rest of the matrix, occupying its own separate component of the singular value decomposition of M; such a row or column is then impossible to complete exactly without sampling the entirety of that row or column.  Thus, to get exact matrix completion from a small fraction of entries, one needs some sort of incoherence assumption on the singular vectors, which spreads them out across all coordinates in a roughly even manner, as opposed to being concentrated on just a few values.

In a recent paper, Candés and Recht proposed solving the matrix completion problem by minimising the nuclear norm (or trace norm)

\|M\|_* = \sum_{i=1}^{\min(n_1,n_2)} \sigma_i(M) = \hbox{tr}( (M M^*)^{1/2} )

amongst all matrices consistent with the observed data P_\Omega(M).  This nuclear norm is the non-commutative counterpart to the \ell^1 norm for vectors, and so this algorithm is analogous to the \ell^1 minimisation (or basis pursuit) algorithm which is effective for compressed sensing (though not the only such algorithm for this task).  They showed, roughly speaking, that exact matrix completion (for, say, square matrices n_1=n_2=n for simplicity) is ensured with high probability so long as the singular vectors obey a certain incoherence property (basically, their \ell^\infty norm should be close to the minimal possible value, namely O(1/\sqrt{n})), and so long as one has the condition

m \gg n^{1.2} r \log n.

This differs from the presumably optimal threshold of nr \log n by a factor of about n^{0.2}.

The main result of our paper is to mostly eliminate this gap, at the cost of a stronger hypothesis on the matrix being measured:

Main theorem. (Informal statement)  Suppose the n_1 \times n_2 matrix M has rank r and obeys a certain “strong incoherence property”.  Then with high probability, nuclear norm minimisation will recover M from a random sample P_\Omega(M) provided that m \gg n r \log^{O(1)} n, where n := \max(n_1,n_2).

A result of a broadly similar nature, but with a rather different recovery algorithm and with a somewhat different range of applicability, was recently established by Keshavan, Oh, and Montanari.  The strong incoherence property is somewhat technical, but is related to the Candés-Recht incoherence property and is satisfied by a number of reasonable random matrix models.  The exponent O(1) here is reasonably civilised (ranging between 2 and 9, depending on the specific model and parameters being used).
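
As an illustration of nuclear norm minimisation in action, here is a short NumPy sketch of a singular value thresholding (SVT) iteration in the spirit of Cai, Candès, and Shen, which is one standard way of approximately solving this convex program; this is emphatically not the algorithm analysed in the papers discussed above, and the parameter choices below are merely common illustrative defaults.

```python
import numpy as np

def svt_complete(M_obs, mask, tau, delta, iters=500):
    """Singular value thresholding (SVT)-style iteration for matrix completion.
    M_obs holds the observed entries (zero elsewhere); mask is 1 on observed entries."""
    Y = np.zeros_like(M_obs)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt      # soft-threshold the singular values
        Y = Y + delta * mask * (M_obs - X)           # dual update on the observed entries
    return X

rng = np.random.default_rng(0)
n, r, p = 100, 2, 0.3                                          # size, rank, sampling fraction
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # random rank-r matrix
mask = (rng.random((n, n)) < p).astype(float)
X = svt_complete(mask * M, mask, tau=5 * n, delta=1.9)         # conservative step size
print("relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))
```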


This week I am in Australia, attending the ANZIAM annual meeting in Katoomba, New South Wales (in the picturesque Blue Mountains). I gave an overview talk on some recent developments in compressed sensing, particularly with regards to the basis pursuit approach to recovering sparse (or compressible) signals from incomplete measurements. The slides for my talk can be found here, with some accompanying pictures here. (There are of course by now many other presentations of compressed sensing on-line; see for instance this page at Rice.)

There was an interesting discussion after the talk. Some members of the audience asked the very good question as to whether any a priori information about a signal (e.g. some restriction about the support) could be incorporated to improve the performance of compressed sensing; a related question was whether one could perform an adaptive sequence of measurements to similarly improve performance. I don’t have good answers to these questions. Another pointed out that the restricted isometry property was like a local “well-conditioning” property for the matrix, which only applied when one viewed a few columns at a time.

Avi Wigderson‘s final talk in his Distinguished Lecture Series on “Computational complexity” was entitled “Arithmetic computation“: the complexity theory of arithmetic circuits rather than boolean circuits.


The Distinguished Lecture Series at UCLA for this winter quarter is given by Avi Wigderson, who is lecturing on “some topics in computational complexity“. In his first lecture on Wednesday, Avi gave a wonderful talk (in his inimitably entertaining style) on “The power and weakness of randomness in computation“. The talk was based on these slides. He also gave a sort of “encore” on zero-knowledge proofs in more informal discussions after the main talk.

As always, any errors here are due to my transcription and interpretation.

