2019-10-18

combining data; learning rates

Marla Geha (Yale) crashed Flatiron today and we spent some time talking about a nice problem in spectroscopic data analysis: Imagine that you have a pipeline that works on each spectrum (or each exposure or each plate or whatever) separately, but that the same star has been observed multiple times. How do you post-process your individual-exposure results so that the combined results are the same as you would have obtained if you had processed all the exposures simultaneously? You want the calibration to be independent for each exposure, but the stellar template to be the same, for example. This is closely related to the questions that Adrian Price-Whelan (Flatiron) and I have been solving in the last few weeks. You have to carry forward enough marginalized-likelihood information to combine later. This involves marginalizing out the individual-exposure parameters but not the shared parameters. (And maybe making some additional approximations!)
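
To make that concrete, here is a minimal sketch (my own illustration, not Geha's pipeline; all names are hypothetical): suppose each exposure's marginalized likelihood for the shared stellar parameters is approximated as a Gaussian after the per-exposure calibration parameters have been integrated out. Because the exposures are independent given the shared parameters, their likelihoods multiply, and for Gaussians that product is just a precision-weighted combination of the per-exposure summaries.

```python
import numpy as np

# Hypothetical sketch: each exposure n reports a Gaussian approximation
# to its marginalized likelihood for the shared parameters theta, as a
# mean vector mean_n and an inverse covariance (precision) matrix C_n.
# The combined likelihood is the product over exposures, so precisions
# and information vectors simply add.

def combine_exposures(means, inv_covs):
    """Combine per-exposure Gaussian likelihood summaries over theta."""
    precision = sum(inv_covs)                            # precisions add
    info = sum(C @ m for C, m in zip(inv_covs, means))   # information vectors add
    cov = np.linalg.inv(precision)
    return cov @ info, cov                               # combined mean, covariance

# Toy usage: three exposures constraining two shared stellar parameters.
rng = np.random.default_rng(17)
truth = np.array([1.0, -0.5])
means = [truth + 0.1 * rng.standard_normal(2) for _ in range(3)]
inv_covs = [np.eye(2) * 100.0 for _ in range(3)]
mean, cov = combine_exposures(means, inv_covs)
print(mean, np.diag(cov))
```

The point of the sketch is what has to be carried forward: a point estimate alone is not enough; you need (at least) the mean and inverse covariance of each exposure's marginalized likelihood in the shared parameters, and the Gaussian form is one of those "additional approximations".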

As is not uncommon on a Friday, the Astronomical Data Group meeting was great! So many things. One highlight for me was that Lily Zhao (Yale) has diagnosed problems we had in wobble with the learning rate on our gradient descent, and has figured out strategies to address them. I hate optimization! But I love it when very good people diagnose and fix the problems in our optimization code!
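
I don't know the specifics of Lily's fix, but the generic learning-rate failure mode is easy to demonstrate: on a quadratic objective with curvature a, plain gradient descent converges only when the learning rate is below 2 / a. A toy sketch (my own illustration, not wobble code):

```python
# Gradient descent on f(x) = 0.5 * a * x**2, whose gradient is a * x.
# Each step multiplies x by (1 - lr * a), so the iteration converges
# only if lr < 2 / a; above that threshold it oscillates and diverges.

def gradient_descent(grad, x0, lr, steps=100):
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

a = 10.0
grad = lambda x: a * x
print(gradient_descent(grad, x0=1.0, lr=0.05))   # lr < 2/a = 0.2: converges to ~0
print(gradient_escent := gradient_descent(grad, x0=1.0, lr=0.25))  # lr > 2/a: blows up
```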
