2012-05-02

imaging, imaging, imaging, Atlas

After contemplating the wisdom of a Brewer comment on my post of two days ago, I reformulated my issue with resolution vs signal-to-noise: Deconvolution with the true PSF leads to very noisy true scenes (for many reasons, not limited to the scene being much more informative than the data on which it is based, and to small issues with the scene representation being corrected by the generation of large, nearly canceling positive and negative fluctuations in the scene). I want to deconvolve with something narrower than the PSF, but which captures its speckly (or multiple-imaging) structure.

I succeeded in formulating that desire in code, and it works. The idea is that I fit the PSF for each data image with a mixture of fixed-width Gaussians, but when I use the PSF to deconvolve the image, I use not the mixture of Gaussians but a mixture of delta functions with the same positions and amplitudes but no widths. That is, I (in some sense) deconvolve the PSF before using the deconvolved PSF to deconvolve the scene. This prevents the code from deconvolving fully; equivalently, it leaves a band-limited, well-sampled scene. Not sure if I can justify any of this, but it sure does work well in the (very hard) test case Bianco gave us.
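Here is a minimal sketch of that idea, my own illustration rather than our actual pipeline code: fit the measured PSF as a nonnegative mixture of fixed-width Gaussians with one candidate center per pixel, so the fit becomes a linear nonnegative least-squares problem and the amplitude map itself is the delta-function mixture. The width `sigma`, the all-pixel-centers basis, and the function name are all my assumptions here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import nnls

def fit_delta_mixture(psf, sigma=1.0):
    """Fit psf as a nonnegative sum of width-sigma Gaussians, one per pixel.

    The returned amplitude map is the mixture of delta functions with the
    same positions and amplitudes but no widths: the "deconvolved PSF".
    """
    ny, nx = psf.shape
    npix = ny * nx
    # One column per candidate Gaussian center (every pixel). This dense
    # design matrix is only sensible for small PSF postage stamps.
    A = np.empty((npix, npix))
    for j in range(npix):
        delta = np.zeros(npix)
        delta[j] = 1.0
        A[:, j] = gaussian_filter(delta.reshape(ny, nx), sigma).ravel()
    # Linear, nonnegative least squares: gaussian_filter(a, sigma) ~ psf.
    amplitudes, _ = nnls(A, psf.ravel())
    return amplitudes.reshape(ny, nx)
```

The amplitude map then stands in for the PSF in whatever deconvolution step follows. If the Gaussian mixture reproduces the PSF, deconvolving the data with the delta mixture recovers (at best) the true scene convolved with the width-sigma Gaussian, which is why the result stays band-limited and well sampled.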

My despair of yesterday lifted: the signal-to-noise appeared to increase with the amount of data while the angular resolution of the scene held constant, and I conjectured that when we run on the full set of thousands of images we will get even more signal-to-noise without loss of angular resolution. This is the point: With traditional lucky imaging (or TLI), you shift-and-add the best images. Where you set that cut (best vs non-best) sets the angular resolution and signal-to-noise of your stack; they are inversely related. With the code we now have, I conjecture that we will get the signal-to-noise of the full imaging set but the angular resolution of the best. I hope I am right.
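For concreteness, here is a minimal sketch of the TLI baseline being described, again my own illustration with an assumed sharpness metric (brightest pixel, the usual criterion when a bright reference star is in the field) and assumed names: rank the frames, keep only the best fraction, shift each to a common center, and average. Moving `keep_fraction` moves you along the resolution-vs-signal-to-noise tradeoff.

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def lucky_stack(frames, keep_fraction=0.1):
    """Shift-and-add the sharpest keep_fraction of frames.

    A smaller keep_fraction buys angular resolution at the cost of
    signal-to-noise; a larger one does the reverse.
    """
    # Sharpness proxy: the brightest pixel of each frame.
    sharpness = np.array([f.max() for f in frames])
    n_keep = max(1, int(keep_fraction * len(frames)))
    best = np.argsort(sharpness)[::-1][:n_keep]

    ny, nx = frames[0].shape
    center = np.array([(ny - 1) / 2.0, (nx - 1) / 2.0])
    stack = np.zeros((ny, nx))
    for i in best:
        # Register on the brightest pixel (a crude stand-in for the
        # reference star's measured centroid).
        peak = np.unravel_index(np.argmax(frames[i]), (ny, nx))
        stack += subpixel_shift(frames[i], center - np.array(peak))
    return stack / n_keep
```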

On a related note, Fergus and I talked about one important difference between computer vision and astronomy: In computer vision, the image produced by some method is the result. In astronomy, an image produced by some pipeline is not the result, it is something that is measured to produce the result. This puts very strong constraints on the astronomers: They have to produce images that can be used quantitatively.

I also did some work on the Atlas, both writing (one paragraph a day is the modest goal) and talking with Patel and Mykytyn, my two undergraduate research assistants at NYU.

2 comments:

  1. Magain et al. (1998) provide some justification via the sampling theorem (http://adsabs.harvard.edu/abs/1998ApJ...494..472M). Fred Courbin has been deconvolving stacks of images in this way for years, with very nice results.

  2. @Phil: Nice! That is exactly what I am looking for. Who said blogging is a waste of time?
