Lecture 15 Summary


Briefly discussed the Golub–Kahan bidiagonalization method for the SVD, just to get the general flavor. At this point, however, we are mostly through with the details of dense linear-algebra techniques: the important thing is to grasp the fundamental ideas rather than zillions of little details, since in practice you're just going to use LAPACK anyway.
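For flavor only, here is a minimal numpy sketch (my own illustration, not code from lecture) of the first phase of the Golub–Kahan approach: reducing A to bidiagonal form by Householder reflections applied alternately from the left and the right; the SVD of the bidiagonal matrix is then computed in a separate phase. A real implementation, such as LAPACK's dgebrd, would also accumulate the reflectors and use blocked updates.

    import numpy as np

    def householder(x):
        """Return v such that (I - 2 vv^T / v^T v) x is a multiple of e1."""
        v = x.copy()
        sigma = np.linalg.norm(x)
        if sigma == 0.0:
            v[0] = 1.0  # degenerate case: x is already zero
            return v
        v[0] += np.copysign(sigma, x[0])  # sign choice avoids cancellation
        return v

    def bidiagonalize(A):
        """Reduce A (m >= n) to upper-bidiagonal B = U^T A V by Householder
        reflections: left ones zero a column below the diagonal, right ones
        zero a row beyond the superdiagonal. Sketch only; U, V not kept."""
        B = np.array(A, dtype=float)
        m, n = B.shape
        for k in range(n):
            v = householder(B[k:, k])                     # left reflection
            B[k:, k:] -= (2.0 / (v @ v)) * np.outer(v, v @ B[k:, k:])
            if k < n - 2:
                v = householder(B[k, k+1:])               # right reflection
                B[k:, k+1:] -= (2.0 / (v @ v)) * np.outer(B[k:, k+1:] @ v, v)
        return B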

Started discussing (at a very general level) a new topic: iterative algorithms, usually for sparse matrices, and in general for matrices where you have a fast way to compute matrix-vector products Ax but cannot (practically) mess around with the specific entries of A.
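As a concrete (made-up) illustration of "fast Ax without touching entries," using scipy: an iterative solver only ever needs a black-box function x ↦ Ax.

    import numpy as np
    from scipy.sparse import random as sparse_random
    from scipy.sparse.linalg import LinearOperator

    n = 100_000
    A = sparse_random(n, n, density=1e-4, format="csr")  # hypothetical sparse matrix

    # Sparse matvec costs O(nnz), not O(n^2); this is all an iterative method needs.
    y = A @ np.ones(n)

    # Matrix-free version: the solver sees only a black-box x -> Ax,
    # never the individual entries of A.
    Aop = LinearOperator((n, n), matvec=lambda x: A @ x)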

Discussed iterative methods in general; briefly mentioned sparse-direct methods (which we will come back to later) and the sources of special structure (e.g. sparsity) that allow rapid matrix-vector products with large matrices.
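A quick preview of the sparse-direct idea (details later), in a hypothetical scipy example: a tridiagonal matrix (the 1D Laplacian, chosen just as a convenient stand-in) factors with essentially no fill-in, so a direct sparse LU is cheap.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import splu

    n = 1000
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    lu = splu(A)              # sparse LU with a fill-reducing reordering
    b = np.ones(n)
    x = lu.solve(b)
    print(np.linalg.norm(A @ x - b))   # residual check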

Iterative methods. Discussed the common circumstances where matrix-vector products Ax are fast (sparse matrices, spectral methods with FFTs, integral equations with fast-multipole methods, etc.). General idea of starting with a guess for the solution (e.g. a random vector) and iteratively improving it.
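The simplest concrete instance of "guess and iteratively improve" is Richardson iteration; it is not one of the methods emphasized in lecture, just a bare-bones sketch (the function and parameter names are mine) of what an iterative-improvement loop looks like:

    import numpy as np

    def richardson(matvec, b, omega, x0=None, tol=1e-8, maxiter=1000):
        """Repeatedly add a multiple of the residual r = b - Ax to the guess.
        Converges when all eigenvalues of I - omega*A lie inside the unit
        circle; the only access to A is through the black-box matvec."""
        x = np.zeros_like(b) if x0 is None else x0.copy()
        for _ in range(maxiter):
            r = b - matvec(x)
            if np.linalg.norm(r) <= tol * np.linalg.norm(b):
                break
            x += omega * r
        return x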

Emphasized that there are many iterative methods and that there is no clear "winner" or single bulletproof library that you can use without much thought (unlike LAPACK for dense direct solvers). The choice is problem-dependent and often requires some trial and error. Then there is the whole topic of preconditioning (which we will discuss more later), which is even more problem-dependent. Briefly listed several common techniques for linear systems (Ax=b) and eigenproblems (Ax=λx or Ax=λBx).
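To make the trial-and-error point concrete, here is a hypothetical scipy session trying a couple of Krylov solvers on a toy tridiagonal matrix; cg and gmres are two of the standard choices for Ax=b, and eigsh (Lanczos-based) handles symmetric eigenproblems:

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg, gmres, eigsh

    n = 2000
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    # Ax = b: CG requires symmetric positive-definite A; GMRES handles general A.
    x_cg, info = cg(A, b)        # info == 0 on successful convergence
    x_gm, info = gmres(A, b)

    # Ax = λx: a few extreme eigenvalues of a symmetric matrix.
    vals, vecs = eigsh(A, k=4, which="LA")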

Gave a simple example: the power method, which we already learned. This, however, only keeps the most recent vector Aⁿv and throws away the previous ones. Introduced Krylov subspaces, and the idea of Krylov subspace methods: find the best solution in the whole subspace spanned by v, Av, …, Aⁿ⁻¹v.
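A side-by-side sketch of the contrast (assuming a black-box matvec; the names are mine): the power method keeps only the latest iterate, while a Krylov method keeps an orthonormalized basis of all of them and then extracts its answer from that subspace.

    import numpy as np

    def power_method(matvec, n, iters=200, rng=np.random.default_rng(0)):
        """Keep only the latest vector A^k v; earlier iterates are discarded."""
        v = rng.standard_normal(n)
        v /= np.linalg.norm(v)
        for _ in range(iters):
            w = matvec(v)
            v = w / np.linalg.norm(w)
        return v @ matvec(v), v   # Rayleigh quotient: dominant-eigenvalue estimate

    def krylov_basis(matvec, v, k):
        """Keep *all* iterates: orthonormalize v, Av, ..., A^(k-1)v via
        Gram-Schmidt into a basis Q of the Krylov subspace (assumes no
        breakdown); a Krylov method then seeks the best solution Q @ y."""
        Q = [v / np.linalg.norm(v)]
        for _ in range(k - 1):
            w = matvec(Q[-1])
            for q in Q:               # orthogonalize against the previous basis
                w -= (q @ w) * q
            Q.append(w / np.linalg.norm(w))
        return np.column_stack(Q)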