In separate conversations with Lang and with Marshall, I have been talking about image modeling and deconvolution. I finally realized today something very fundamental about deconvolution: A deconvolved image will never have a well-defined point-spread function, because the high signal-to-noise parts of the image (think bright stars in astronomy) will deconvolve well, while the low signal-to-noise parts (think faint stars) won't. So the effective point-spread function will depend on the brightness of the source.
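To make that concrete, here is a tiny, hypothetical one-dimensional experiment (my own toy, not anything from the Lang or Marshall conversations): two point sources of very different brightness are blurred by the same Gaussian PSF, noised, and then deconvolved with a hand-rolled Richardson–Lucy loop, so no particular library API is assumed. All names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200
x = np.arange(n)

def gaussian(center, sigma):
    g = np.exp(-0.5 * ((x - center) / sigma) ** 2)
    return g / g.sum()

psf = gaussian(n // 2, 4.0)  # symmetric PSF, so it is its own adjoint below

scene = np.zeros(n)
scene[60] = 1000.0   # bright star: high signal-to-noise
scene[140] = 10.0    # faint star: low signal-to-noise

def convolve(a, kernel):
    # circular convolution via FFT; ifftshift moves the kernel center to 0
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(np.fft.ifftshift(kernel))))

image = convolve(scene, psf) + rng.normal(0.0, 0.5, n)

# Richardson-Lucy iteration, written out by hand; clipping negative
# noise pixels to a small positive value is a hack, but fine for a toy
estimate = np.full(n, max(image.mean(), 1e-6))
for _ in range(100):
    ratio = np.clip(image, 1e-6, None) / np.clip(convolve(estimate, psf), 1e-6, None)
    estimate = estimate * convolve(ratio, psf)  # symmetric PSF: adjoint == psf

def width(profile, center, half=15):
    # crude count of pixels above half the local peak, a stand-in for FWHM
    w = profile[center - half:center + half]
    return int(np.count_nonzero(w > 0.5 * w.max()))

print("PSF width        :", width(psf, n // 2))
print("bright-star width:", width(estimate, 60))   # sharpens up
print("faint-star width :", width(estimate, 140))  # stays noise-limited
```

The exact widths wander with the noise realization and iteration count, but the ordering should be robust: the bright star deconvolves to well below the PSF width, while the faint star does not. That brightness-dependent effective PSF is the point.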
Properly done, image modeling or deconvolution, in some sense, maps the information in the image; it doesn't really make a higher-resolution image in the sense in which astronomers usually use the word resolution.
This all gets back to the point that you shouldn't do anything as complicated as deconvolution unless you have established that, for your specific science goals, it is the best (or only) way to achieve them. For most tasks, deconvolution per se is not the right tool. Kind of like PCA, which I have been complaining about recently.
In other news, Kathryn Johnston (Columbia) agreed with me that my first baby steps on cold streams (very cold streams) are probably novel, though my literature search is not yet complete.
In yet other news, Surhud More and I figured out a correct (well, I really mean justifiable) Bayesian strategy for constraining the cosmic transparency in the visible, by marginalizing over world models.
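I won't reproduce the details here, but schematically, and with all symbols below illustrative rather than taken from the More collaboration, marginalizing over world models means something like the following: if the M_k are the candidate world models, θ_k their nuisance parameters, D the data, and τ the transparency, then

```latex
% Schematic only: symbols are illustrative, not from the actual analysis.
p(\tau \mid D) \;\propto\; p(\tau) \sum_{k} p(M_k)
  \int p(D \mid \tau, \theta_k, M_k)\, p(\theta_k \mid M_k)\, \mathrm{d}\theta_k
```

so the constraint on τ doesn't lean on any single cosmological model being right.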