Dustin Lang (Toronto) and I spent time discussing this strongly worded paper about combining images. The paper makes extremely strong claims about what its method for co-adding images can do; it claims that the combined image is “optimal” for any measurement of any time-independent quantity of any kind in the original images. The word “optimal” is one I'm allergic to: For one, once you have written down your assumptions with sufficient precision, there is no optimal method, there is just one method! For two, a method can only be optimal for a specific purpose, or under a specific set of assumptions. So while it is probably true (we are looking at it) that this paper delivers a method that is optimal for some purposes, it cannot be optimal for every purpose whatsoever.
I guess I have a few general philosophical points to make here: Methods flow from assumptions! So if you can clearly state a complete set of assumptions, you will fully specify your method; there will be only one method that makes sense or is justifiable under those assumptions. It will therefore be (trivially) optimal for your purposes. That is, any well-specified method is optimal, by construction. And methods are defined not by what they do but by what they don't do. That is, your job when you deliver a method is to explain all the ways that it won't work, will fail, and won't be appropriate in real-world situations. Because most people are trying to figure out whether their problem is appropriate to your method! This means that much of the discussion of a methodological paper should be about how or why the method will fail or become wrong as the assumptions are violated. And finally, a method that relies on you knowing your PSF perfectly, and having perfectly registered images, and having no rotations of the detector relative to the sky, and having all images taken through exactly the same atmospheric transmission, and having all images taken with the same filter and detector, and having no photon noise beyond background noise, and having perfect flat-fielding, and having identical noise in every pixel, and having absolutely no time variability is not a method that is optimal for any measurement you will actually make. That said, the paper contains some extremely good and important ideas, and we will be citing it positively.
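To make that list of assumptions concrete: under exactly those idealized conditions (registered images on a common pixel grid, perfectly known PSFs, white background-limited noise of known per-image variance, nothing on the sky varying between exposures), the method reduces, as I understand it, to a per-frequency matched-filter combination in Fourier space. Here is a minimal numpy sketch of that kind of coadd; the function name, the normalization by the square root of the summed filter power, and the regularization floor are my choices for illustration, not the paper's code.

```python
import numpy as np

def matched_filter_coadd(images, psfs, sigmas, eps=1e-12):
    """Fourier-space matched-filter coadd (a sketch, not the paper's code).

    Assumes the idealized conditions discussed above: every image sits on
    the same pixel grid with no rotation, every PSF is known perfectly,
    the noise is white background noise with per-image standard deviation
    sigma_j, and nothing on the sky varies between exposures.
    """
    num = np.zeros(images[0].shape, dtype=complex)  # matched-filter sum
    den = np.zeros(images[0].shape)                 # summed filter power
    for img, psf, sigma in zip(images, psfs, sigmas):
        P = np.fft.fft2(np.fft.ifftshift(psf))  # PSF with origin at pixel (0, 0)
        R = np.fft.fft2(img)
        num += np.conj(P) * R / sigma**2  # inverse-variance matched filter
        den += np.abs(P)**2 / sigma**2
    # Dividing by sqrt(den) whitens the noise in the coadd; eps (my
    # assumption) guards frequencies where every PSF, and hence den, vanishes.
    return np.fft.ifft2(num / np.sqrt(den + eps)).real
```

Note how every one of the assumptions above appears in the sketch: the single scalar sigma_j per image encodes the identical-white-noise assumption, the shared FFT grid encodes perfect registration and no rotation, and the PSFs are taken as perfectly known inputs. Violate any of these and the per-frequency weights are no longer the right ones.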
On the subject of that particular paper, it's worth noting that this is essentially the same stuff discovered (but never published) by Kaiser in 2004. What I didn't know until recently was that there was actually a practical implementation and analysis of the method back in 2011 that *was* published. I'm not sure I agree with their conclusions in general (I don't think they realize how much those conclusions depend on the distribution of seeing). I also *think* I've convinced myself that this method is a limiting case of IMCOM, but that's much harder to spot.
Anyhow, I completely agree with your analysis - the reason this method has not become popular (yet!) has less to do with people being unaware of it than it does with the difficulty of validating how much its very restrictive assumptions matter in different realistic scenarios.