
Alternating methods for SVD

November 9, 2010

Posted by Sarah in Uncategorized.

Last week’s applied math seminar was Arthur Szlam, talking about alternating methods for SVD. I’m putting up some notes here; several results are presented without proof.

Given an m x n matrix A, we want to find the best rank k approximation to A.
There is an explicit solution to this problem, known as the singular value decomposition. We write
A = U \Sigma V^T
where \Sigma is diagonal (its entries are the singular values) and U, V are orthogonal. The columns of U are the eigenvectors of A A^T and the columns of V are the eigenvectors of A^T A. This costs O(m^2 n) operations, and it's a completely explicit solution. To get a rank-k approximation, we take
\hat{U} \hat{\Sigma} \hat{V}^T
where the “hat” means taking the first k columns of each matrix.
This is the best rank-k approximation, in both the Frobenius norm and the operator norm (the Eckart-Young theorem).
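As a quick concrete check (a numpy sketch; the helper name is mine), truncating the SVD achieves an operator-norm error of exactly \sigma_{k+1}:

```python
import numpy as np

# Best rank-k approximation via the full SVD (the O(m^2 n) route).
# Hypothetical helper name; any dense SVD routine would do.
def best_rank_k(A, k):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # "Hat" matrices: first k columns of U, top k singular values,
    # first k rows of V^T.
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))
A5 = best_rank_k(A, 5)
s = np.linalg.svd(A, compute_uv=False)
# Eckart-Young: the operator-norm error is exactly sigma_{k+1}.
assert np.isclose(np.linalg.norm(A - A5, 2), s[5])
```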

However, if k << m, O(m^2 n) time is wasteful. There are recent attempts to do this faster, in O(mkn) time.

We fix k and a small oversampling parameter p, and let l = k + p. Draw an l \times m random Gaussian matrix \Omega, and set Q^T = \Omega A.
Calculate the SVD \tilde{U} \tilde{\Sigma} \tilde{V}^T = Q, and set U to be the first k columns of \tilde{U}, etc.
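In numpy the whole procedure is a few lines (a sketch with hypothetical names; the QR step that orthonormalizes the sketch and the small SVD of A Q are standard details the talk leaves implicit):

```python
import numpy as np

# Randomized rank-k SVD: sketch the row space of A with an l x m Gaussian,
# orthonormalize, then take the SVD of the small m x l factor A @ Q.
def randomized_svd(A, k, p=5, seed=None):
    rng = np.random.default_rng(seed)
    l = k + p
    Omega = rng.standard_normal((l, A.shape[0]))   # l x m Gaussian test matrix
    Q, _ = np.linalg.qr((Omega @ A).T)             # n x l, orthonormal columns
    U, s, Wt = np.linalg.svd(A @ Q, full_matrices=False)
    V = Q @ Wt.T
    return U[:, :k], s[:k], V[:, :k]               # first k columns of each

# Sanity check: a matrix of exact rank 6 is recovered (almost surely).
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 6)) @ rng.standard_normal((6, 80))
U, s, V = randomized_svd(A, k=6, seed=1)
assert np.allclose(U * s @ V.T, A)
```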

Why does this work?

Well, pretend A really has rank 5, for instance. Hit A with a random matrix: it is very unlikely that any of the random vectors lands in the kernel of A, and whatever survives lies in the row space. So we are very likely to capture the full row space of A, and we almost surely preserve the SVD exactly. In real life, however, we don't have a matrix of known rank k.

What we have is a guarantee of quality, in terms of the decay of the singular values:
E ||A - A Q Q^T|| \le [ 1 + \sqrt{\frac{k}{p-1}} + \frac{e \sqrt{k + p}}{p} \sqrt{m-k}] \sigma_{k +1}

Here the expected value is taken over the random Gaussian draw, the norm is the operator norm, and \sigma_{k+1} is the (k+1)-st singular value; if A really has rank k, this is zero, and the expected error is therefore zero. If \sigma_{k+1} is small, then the bound is good. We're in danger if m is large and \sigma_{k+1} is not very small.
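A quick numerical illustration of the bound (a sketch; note the bound is on the expectation over \Omega, so a single draw could in principle exceed it, though it rarely does since the constant is loose):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k, p = 200, 100, 10, 5
A = rng.standard_normal((m, n))
Omega = rng.standard_normal((k + p, m))      # l x m Gaussian, l = k + p
Q, _ = np.linalg.qr((Omega @ A).T)           # orthonormalize the sketch
err = np.linalg.norm(A - A @ Q @ Q.T, 2)     # operator-norm error
s = np.linalg.svd(A, compute_uv=False)
bound = (1 + np.sqrt(k / (p - 1))
         + np.e * np.sqrt(k + p) / p * np.sqrt(m - k)) * s[k]
# The error is at least sigma_{l+1} (the projection has rank <= l = k + p)
# and, for this draw, well under the expectation bound.
assert s[k + p] <= err <= bound
```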

This actually happens, unfortunately. Term-document matrices are an example: a collection of text documents forms the rows and words form the columns, with each matrix entry counting how often a word was used in each document. (A little imagination should indicate the importance of these kinds of matrices for semantic analysis.) These matrices are sparse and large, but their singular values decay slowly. So what can we do here?

We hit \Omega with some more powers of A, to drive down the smaller singular values. In practice, 10 or 15 times.
Q_q^T = \Omega A (A^TA)^q
Now we have a new estimate, depending on the power q.

E ||A - A Q_q Q_q^T|| \le [1 + \sqrt{\frac{k}{p-1}} + \frac{e \sqrt{k + p}}{p} \sqrt{m-k}]^{1/(2q + 1)} \sigma_{k+1}
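A small experiment with the power scheme (a numpy sketch with hypothetical names; I use a test matrix with a geometrically decaying spectrum, and a naive loop without the re-orthonormalization between powers one would add in floating point):

```python
import numpy as np

# Power-iteration sketch: Q_q spans the row space of Omega A (A^T A)^q,
# which damps the tail singular values relative to the leading ones.
def power_sketch(A, k, p=5, q=0, seed=None):
    rng = np.random.default_rng(seed)
    Y = rng.standard_normal((k + p, A.shape[0])) @ A   # Omega A
    for _ in range(q):
        Y = (Y @ A.T) @ A                              # multiply by A^T A
    Q, _ = np.linalg.qr(Y.T)                           # orthonormalize
    return Q

# Test matrix with singular values 0.8^0, 0.8^1, ... (slow-ish decay).
rng = np.random.default_rng(2)
U, _ = np.linalg.qr(rng.standard_normal((200, 100)))
V, _ = np.linalg.qr(rng.standard_normal((100, 100)))
sv = 0.8 ** np.arange(100)
A = U @ np.diag(sv) @ V.T

Q0 = power_sketch(A, 10, q=0, seed=3)
Q4 = power_sketch(A, 10, q=4, seed=3)
err0 = np.linalg.norm(A - A @ Q0 @ Q0.T, 2)
err4 = np.linalg.norm(A - A @ Q4 @ Q4.T, 2)
# More powers push the error down toward the optimal sigma_{k+p+1}.
assert sv[15] - 1e-9 <= err4 <= err0 + 1e-6
```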

This works, but it can be memory intensive to save all the intermediate powers. This is where the "alternating" method comes in.

Here we look at a related problem: matrix completion. You're given a subset of the entries of a matrix A, and you know A is low rank. How do you fill in the rest of the entries? The simplest approach is alternating least squares.

Write A = PQ^T where P is m-by-k and Q is n-by-k.
Alternate between updates of P and Q:
P_{j+1} = argmin_P ||P Q_j^T - A||^2
Q_{j+1} = argmin_Q ||P_{j+1} Q^T - A||^2
These are least-squares problems, and can be solved explicitly using QR decompositions:

Step 1: form an m \times l Gaussian matrix, P_0.
Step 2: form an n \times l matrix Q_0 such that
Q_0 R_0 = (P_0^T A)^T
(a QR decomposition, with Q_0 orthogonal and R_0 upper triangular.)
Step 3: iterate for j = 0, \dots, q-1:
P_{j+1} R_{2j+1} = A Q_j
Q_{j+1} R_{2j+2} = (P_{j+1}^T A)^T
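The steps above, in numpy (a sketch with hypothetical names; each half-step is a least-squares solve realized as a QR factorization, and only the current P and Q are kept in memory):

```python
import numpy as np

# Alternating (QR-based) iteration: equivalent in exact arithmetic to the
# power scheme Q_q^T = Omega A (A^T A)^q, but no powers of A are stored.
def alternating_sketch(A, k, p=5, q=3, seed=None):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    l = k + p
    P = rng.standard_normal((m, l))          # Step 1: Gaussian P_0
    Q, _ = np.linalg.qr((P.T @ A).T)         # Step 2: Q_0 R_0 = (P_0^T A)^T
    for _ in range(q):                       # Step 3
        P, _ = np.linalg.qr(A @ Q)           # P_{j+1} R_{2j+1} = A Q_j
        Q, _ = np.linalg.qr((P.T @ A).T)     # Q_{j+1} R_{2j+2} = (P_{j+1}^T A)^T
    return Q

# Spectrum 0.7^i: after a few alternations the projection error sits near
# the optimal sigma_{l+1}, well below sigma_{k+1}.
rng = np.random.default_rng(4)
U, _ = np.linalg.qr(rng.standard_normal((60, 40)))
V, _ = np.linalg.qr(rng.standard_normal((40, 40)))
sv = 0.7 ** np.arange(40)
A = U @ np.diag(sv) @ V.T
Q = alternating_sketch(A, k=10, q=3, seed=4)
err = np.linalg.norm(A - A @ Q @ Q.T, 2)
assert sv[15] - 1e-12 <= err <= sv[10]
```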

Coming back to our own problem, we can use the same formula to calculate Q_q^T.
Q_j = (A^T A)^j A^T P_0 \prod_{s = 0}^{2j} R_s^{-1}.

This turns out to work well in practice; the example set was a database of labeled face images, whose first 20 or so singular values decay fast while the rest decay very slowly. This kind of alternating SVD works well for faces.



1. Igor Carron - November 10, 2010

Thanks Sarah.
