2010-08-27

spanning a linear space

Tsalmantza and I confronted the degeneracies you face when you use a set of vectors to span a linear space: The components of our spectral basis can be reordered, renormalized, and made into linear combinations of one another, all without changing any of the final results. This means (a) you should never really plot the basis vectors themselves, only the fits of the basis vectors to real objects; it is not the basis vectors that are the result of your optimization, it is the subspace they span. And (b) you should break these degeneracies as you go, so your code outputs something predictable! We broke the degeneracies the same way PCA does, except much better (of course!): We normalize the basis vectors sensibly (all have the same sum of squares), orthogonalize them by diagonalizing the squared matrix of their coefficients when fit to the data, and sort them by eigenvalue, most important first.
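
For concreteness, here is a minimal numpy sketch of that degeneracy-breaking step, under my own assumptions about shapes and names (coefficients A of shape N objects by K components, basis V of shape K components by M pixels); this is not necessarily the exact form our code takes:

```python
import numpy as np

def fix_basis(A, V):
    """Break the span-preserving degeneracies in a linear basis.

    A : (N, K) coefficients from fitting the K basis vectors to N objects
    V : (K, M) basis vectors; only the subspace their rows span matters

    Returns (A, V) with A @ V unchanged, the coefficient vectors
    orthogonal, every basis vector with the same sum of squares, and
    the components sorted most-important-first.
    """
    # Orthogonalize: diagonalize the squared coefficient matrix A^T A
    # with an orthogonal rotation U; A U (U^T V) == A V, so every fit
    # to the data is untouched.
    evals, U = np.linalg.eigh(A.T @ A)
    A, V = A @ U, U.T @ V

    # Normalize: give every basis vector unit sum of squares, absorbing
    # the scales into the coefficients; A^T A stays diagonal because
    # rescaling its columns only rescales the diagonal.
    norms = np.sqrt((V ** 2).sum(axis=1))
    A, V = A * norms, V / norms[:, None]

    # Sort by eigenvalue (total squared-coefficient power), biggest first.
    order = np.argsort(np.diag(A.T @ A))[::-1]
    return A[:, order], V[order]
```

The point of writing it this way is that the rotation and rescaling only ever multiply A on the right and V on the left by inverse factors, so the product A @ V, and hence every plotted fit, is invariant.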

Now the code we have outputs exactly what PCA outputs, except that it optimizes chi-squared rather than the unscaled sum of squares, and therefore gives a much better representation of the data. It is a plug-in replacement for PCA that is better in every respect. Have I mentioned that I am excited about this? Now we must write the method paper and then finish the myriad projects we are doing with it.
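
To illustrate the difference in objective, here is a sketch of the per-object chi-squared fit that replaces PCA's unweighted projection; the names and shapes (x, sigma, V) are hypothetical, and I am assuming independent per-pixel Gaussian uncertainties, not necessarily the form the method paper will use:

```python
import numpy as np

def fit_coefficients(x, sigma, V):
    """Chi-squared fit of one object's spectrum to the basis.

    x     : (M,) observed spectrum
    sigma : (M,) per-pixel uncertainties (hypothetical inputs)
    V     : (K, M) basis vectors

    PCA implicitly sets all the sigma equal (the unscaled sum of
    squares); weighting by the real uncertainties is the improvement.
    """
    w = 1.0 / sigma ** 2                  # inverse-variance weights
    lhs = (V * w) @ V.T                   # K x K normal-equations matrix
    rhs = (V * w) @ x
    a = np.linalg.solve(lhs, rhs)         # best-fit coefficients
    chi2 = np.sum(w * (x - a @ V) ** 2)   # the quantity being optimized
    return a, chi2
```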
