2008-07-25

data reduction, data compression, and probabilistic inference

I spent most of my research time today thinking about how to analyze large collections of images. Lang and I are coming around to a data compression framework: We add, change, or make more precise model parameters (such as star positions, fluxes, and adjustments to the PSF or flatfield) only when doing so reduces the total information content of (that is, the smallest compressed size of) the residuals by more than it costs us, in the same information sense (again, compressed size), to describe the added parameters. This is data reduction.
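A minimal sketch of that accept/reject criterion, under assumptions not in the post: a generic compressor (bz2) over quantized residuals stands in for "smallest compressed size," and each parameter is charged a fixed number of bits. The function names here are hypothetical, for illustration only.

```python
import bz2
import numpy as np

def description_length_bits(residuals, n_params, bits_per_param=32):
    """Two-part description length: compressed residuals plus parameter cost.

    Assumption: residuals are in units of the noise level, so rounding to
    integers before compression is a crude stand-in for a proper noise model.
    """
    quantized = np.round(residuals).astype(np.int16).tobytes()
    residual_bits = 8 * len(bz2.compress(quantized))
    parameter_bits = n_params * bits_per_param
    return residual_bits + parameter_bits

def accept_refinement(old_residuals, new_residuals, old_n_params, new_n_params):
    """Accept the refined model only if it shortens the total description."""
    return (description_length_bits(new_residuals, new_n_params)
            < description_length_bits(old_residuals, old_n_params))
```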

There is a fully worked-out theory of inference based on data compression; indeed, to the extremists it is the only probabilistic theory of inference: probabilities are associated with the bit lengths of a lossless model description of the data stream. A beautiful (and freely available on the web; nice!) book on the subject is Information Theory, Inference, and Learning Algorithms by David MacKay.
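The correspondence the extremists lean on is the standard Shannon/MDL one: an ideal lossless code assigns a message of probability P a length of -log2 P bits, so minimizing total description length is the same as maximizing posterior probability. Schematically (symbols are generic, not from the post):

```latex
% Two-part code length for hypothesis H and data D:
L(H) + L(D \mid H) = -\log_2 P(H) - \log_2 P(D \mid H)
                   = -\log_2 \left[ P(H)\, P(D \mid H) \right]
```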

For astronomical imaging, the best compression scheme ought to be a physical model of the sky, a physical model of every camera, and, for each image, its pointing on the sky, the camera from which it came, and its residuals. The parameters of the sky model constitute the totality of our astronomical knowledge, and we can marginalize over the rest. I love the insanity of that.
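In equation form, the marginalization over the nuisance parameters would look schematically like this (notation is illustrative, not from the post):

```latex
% Keep the sky; integrate out the per-camera and per-image nuisance parameters.
p(\mathrm{sky} \mid \mathrm{data})
  = \int p(\mathrm{sky}, \mathrm{cameras}, \mathrm{pointings} \mid \mathrm{data})
    \, d(\mathrm{cameras}) \, d(\mathrm{pointings})
```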
