Federica Bianco and BJ Fulton (LCOGT) pointed out that in our test of lucky imaging pipelines, our blind deconvolution runs—which try to find the high resolution scene that explains all of the data—fail to detect a faint star that is clearly there in the traditional lucky imaging stack. Fundamentally, the issue is that you can't deconvolve and also maintain high signal-to-noise. We really should reconvolve our deconvolved scenes as we go. This is a big issue for those of us trying to get away from simple co-adding of images; co-adding is very easy to understand, and it preserves some aspects of signal-to-noise (though not all, because of its treat-all-data-the-same properties).
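To make the trade concrete, here is a hedged toy sketch in numpy/scipy—not our pipeline and not the blind-deconvolution code under test—that simulates a faint companion in a stack of variable-seeing frames and compares the straight co-add, a per-frame Richardson–Lucy deconvolution, and that same deconvolution reconvolved with a modest target PSF. All fluxes, frame counts, and PSF widths are invented for illustration.

```python
# Toy illustration (made-up numbers): deconvolving noisy frames amplifies
# per-pixel noise and can bury a faint star that the plain co-add shows;
# reconvolving the deconvolved scene with a modest target PSF tames this.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(42)

def gaussian_psf(fwhm, size=33):
    """Unit-sum circular Gaussian PSF."""
    sigma = fwhm / 2.355
    y, x = np.mgrid[:size, :size] - size // 2
    psf = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def richardson_lucy(image, psf, n_iter=50):
    """Plain Richardson-Lucy deconvolution (multiplicative updates)."""
    image = np.clip(image, 1e-9, None)      # RL assumes non-negative data
    estimate = np.full_like(image, image.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        model = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.clip(model, 1e-9, None)
        estimate *= fftconvolve(ratio, psf_flip, mode="same")
    return estimate

# True scene: a bright star plus a faint neighbor.
scene = np.zeros((65, 65))
scene[32, 32] = 1000.0   # bright star
scene[32, 44] = 20.0     # faint star we care about

# Simulated lucky-imaging frames: per-frame seeing plus sky noise.
frames, psfs = [], []
for fwhm in rng.uniform(2.0, 4.0, size=16):
    psf = gaussian_psf(fwhm)
    frame = fftconvolve(scene, psf, mode="same") + rng.normal(0.0, 1.0, scene.shape)
    frames.append(frame)
    psfs.append(psf)

# 1. Straight co-add: treat-all-data-the-same, but easy and high per-pixel s/n.
coadd = np.mean(frames, axis=0)

# 2. Deconvolve each frame and average: sharper, but per-pixel noise blows up.
deconv = np.mean([richardson_lucy(f, p) for f, p in zip(frames, psfs)], axis=0)

# 3. Reconvolve the deconvolved scene with a modest target PSF, trading some
#    resolution back for per-pixel signal-to-noise.
reconv = fftconvolve(deconv, gaussian_psf(1.5), mode="same")

for name, img in [("coadd", coadd), ("deconvolved", deconv), ("reconvolved", reconv)]:
    faint_peak = img[30:35, 42:47].max()
    noise = img[5:20, 5:20].std()        # empty corner as a crude noise patch
    print(f"{name:12s}: faint-star peak / background rms = {faint_peak / noise:.1f}")
```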
In the morning, I re-worked my forward-going Sloan Atlas schedule. Writing a book is hard, at least for someone as easily distracted as me.
"We really should reconvolve our deconvolved scenes as we go."
I completely disagree. I also don't understand how reconvolving would create new flux, which is what you would need to pick up the star.
Maybe the issue is that the prior over high resolution scenes assigns low probability to there being a point mass in the scene? The old "MaxEnt" deconvolution codes always had trouble with stars for this reason.
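For reference (an aside of mine, not from the thread): the entropy regularizer used by classic MaxEnt deconvolution codes is typically of the form

$$ S(f) = \sum_k \left[ f_k - m_k - f_k \ln\frac{f_k}{m_k} \right], $$

where $m_k$ is a smooth default model. $S$ is maximized when $f = m$, and for a fixed total flux it is larger when the flux is spread out than when it is piled into one pixel, so a point source is exactly the kind of structure this prior disfavors.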
@Brendon: You are right, of course; we can't get s/n by convolving. But we can improve per-pixel s/n if we limit the "extent" of the deconvolution. I just figured out last night how to do that sensibly. More posts on this subject soon.
We solved this in Bolton & Schlegel; we just didn't implement it! ;)
http://arxiv.org/abs/0911.2689
Reconvolve with the matrix square root of the deconvolved flux correlation matrix, and you get stacking with optimal resolution, SNR, and diagonal covariance, without any priors! A.K.A., lossless likelihood-functional compression of multiple-image data of a static scene down to a single image. Then you can put on whatever priors you like.
http://arxiv.org/abs/1111.6525
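For concreteness, here is a hedged numerical sketch of that reconvolution step in numpy/scipy—my toy, not code from either paper—on a 1-D single-image problem rather than a multi-image stack, just to show the algebra: deconvolve with the Fisher matrix, then reconvolve with the row-normalized symmetric square root of the Fisher matrix so the reconvolved pixels carry diagonal covariance. All sizes, fluxes, and the small ridge term are invented for illustration.

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)

# Toy 1-D forward model: an n-pixel scene blurred by a Gaussian kernel,
# observed with unit white noise. Everything here is made up for illustration.
n = 60
x = np.arange(n)
A = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 1.5) ** 2)   # blur matrix
A /= A.sum(axis=1, keepdims=True)
Ninv = np.eye(n)                       # inverse noise covariance (white, unit)

scene = np.zeros(n)
scene[20] = 200.0                      # a bright point source
scene[40] = 15.0                       # a faint one
data = A @ scene + rng.normal(0.0, 1.0, n)

# Maximum-likelihood deconvolution: f = F^{-1} A^T N^{-1} d, where the Fisher
# matrix F = A^T N^{-1} A is the inverse covariance of f. (A tiny ridge keeps
# the toy matrix numerically invertible; it is not part of the argument.)
F = A.T @ Ninv @ A + 1e-8 * np.eye(n)
f_deconv = np.linalg.solve(F, A.T @ Ninv @ data)    # noisy and anti-correlated

# Reconvolution step: take Q = F^{1/2}, normalize its rows (S = diag of the
# row sums of Q), and set R = S^{-1} Q. Then f_tilde = R f_deconv has
# covariance R F^{-1} R^T = S^{-2}, i.e. diagonal, with no prior applied.
Q = np.real(sqrtm(F))
s = Q.sum(axis=1)
R = Q / s[:, None]
f_tilde = R @ f_deconv
var_tilde = 1.0 / s**2                 # per-pixel variances of f_tilde

cov_deconv = np.linalg.inv(F)
print("faint-source S/N, deconvolved vs reconvolved:",
      f_deconv[40] / np.sqrt(cov_deconv[40, 40]),
      f_tilde[40] / np.sqrt(var_tilde[40]))
```

The multi-image case is the same linear algebra: stacking the per-frame design matrices into A just adds their contributions to F, so the reconvolved single image summarizes the whole stack.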