In a day of proposal and letter writing, Fadely came by for a work meeting. We discussed all his projects and publications and priorities. On the HST WFC3 self-calibration project, he is finding that the TinyTim PSF model is not good enough for our purposes: if we use it, we will get a very noisy pixel-level flat. So we decided we have to suck it up and build our own model. Then we realized that, in any small patch of the detector, we can probably make a pretty good model purely empirically from all the stellar sources we see; the entire HST Archive is quite a bit of data. Other decisions include: We will model the pixel-convolved PSF, not the optical PSF alone; there is almost no reason to ever work with anything other than the pixel-convolved PSF, since it is easier to infer (it is smoother) and easier to use (you just sample it; you don't have to convolve it). We will work on a fairly fine sub-pixel grid to deal with the fact that the detector is badly sampled. We will only do a regularized maximum-likelihood or MAP point estimate, using convex optimization. If all that works, this won't set us back too far.
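To make the plan concrete, here is a minimal numerical sketch of those last three decisions: the model is the pixel-convolved PSF on a fine sub-pixel grid, each badly sampled star is a linear measurement of that grid, and the point estimate comes from a regularized least-squares (ridge) problem, which is convex. Everything in the sketch is an illustrative assumption rather than the actual WFC3 setup: the oversampling factor, the stamp size, the Gaussian toy "truth", the ridge penalty, and the assumption that stellar fluxes and sub-pixel offsets are already known.

```python
# A minimal sketch, not the real pipeline: fit a pixel-convolved PSF on a
# fine sub-pixel grid from many badly-sampled stars, posed as a regularized
# (convex) least-squares problem.  All sizes and choices are illustrative.

import numpy as np

S = 4              # sub-pixel oversampling factor (assumption)
NPIX = 7           # postage-stamp size in detector pixels (assumption)
NSUB = NPIX * S    # side length of the fine PSF grid

def design_matrix(dx, dy):
    """Linear map from the fine PSF grid to one star's pixel stamp.

    dx, dy index the star's sub-pixel phase (0 .. S-1).  Because the model
    is the *pixel-convolved* PSF, rendering a star is pure sampling: each
    detector pixel reads off a single fine-grid cell at that phase; there
    is no convolution anywhere in the forward model.
    """
    A = np.zeros((NPIX * NPIX, NSUB * NSUB))
    for i in range(NPIX):
        for j in range(NPIX):
            col = (i * S + dy) * NSUB + (j * S + dx)
            A[i * NPIX + j, col] = 1.0
    return A

rng = np.random.default_rng(42)

# Toy "truth": a smooth pixel-convolved PSF on the fine grid.
yy, xx = np.mgrid[:NSUB, :NSUB]
truth = np.exp(-0.5 * ((xx - NSUB / 2) ** 2 + (yy - NSUB / 2) ** 2) / (1.5 * S) ** 2)
truth /= truth.sum()

# Simulate stars at random sub-pixel phases.  Treating fluxes and phases
# as known keeps the problem linear (hence convex) in the PSF grid.
blocks_A, blocks_y = [], []
for _ in range(150):
    dx, dy = rng.integers(0, S, size=2)
    flux = rng.uniform(1e3, 1e4)
    A = design_matrix(dx, dy)
    y = flux * (A @ truth.ravel()) + rng.normal(0.0, 1.0, NPIX * NPIX)
    blocks_A.append(flux * A)
    blocks_y.append(y)

A_all = np.vstack(blocks_A)
y_all = np.concatenate(blocks_y)

# Regularized least squares (ridge): a single MAP-like point estimate,
# obtained here from the normal equations.
lam = 1.0          # regularization strength (assumption)
n = NSUB * NSUB
psf_hat = np.linalg.solve(A_all.T @ A_all + lam * np.eye(n), A_all.T @ y_all)
psf_hat = psf_hat.reshape(NSUB, NSUB)

print("RMS error vs truth:", float(np.sqrt(np.mean((psf_hat - truth) ** 2))))
```

The "just sample it" convenience is why only the pixel-convolved PSF appears here. In a real per-patch fit the fluxes and sub-pixel positions would themselves have to be estimated (which breaks convexity unless they are alternated with the linear PSF update), and the ridge term would more plausibly be a smoothness penalty, but the PSF update itself stays a convex least-squares problem of this form.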
MAP isn't invariant under change of variables, which makes it feel like a less-than-ideal estimator to me (even if, in practice, for reasonable choices of variables it is basically as reasonable as any other). I think integral measures (median, mean) are preferable for this reason. (Max likelihood is also invariant under change of variables, but ignores prior info.)
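For reference, a short version of the standard argument behind this point, in notation introduced here for illustration ($\theta$ a parameter, $D$ the data, $\varphi = g(\theta)$ a smooth invertible reparameterization):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}

The likelihood is a function of the parameter, not a density over it, so
\[
  \tilde{\mathcal{L}}(\varphi) = \mathcal{L}\!\bigl(g^{-1}(\varphi)\bigr)
  \quad\Longrightarrow\quad
  \hat{\varphi}_{\mathrm{ML}} = g\!\bigl(\hat{\theta}_{\mathrm{ML}}\bigr) .
\]
The posterior, by contrast, is a density and picks up a Jacobian factor
(written here for one dimension),
\[
  p(\varphi \mid D) = p\bigl(g^{-1}(\varphi) \mid D\bigr)
  \left| \frac{\mathrm{d}\theta}{\mathrm{d}\varphi} \right| ,
\]
which reshapes the density and in general moves its mode, so
$\arg\max_{\varphi} p(\varphi \mid D) \neq
 g\bigl(\arg\max_{\theta} p(\theta \mid D)\bigr)$.
The posterior median does commute with monotone reparameterizations; the
posterior mean does not in general, although it is computed from the full
density rather than from a single point of it.

\end{document}
```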