Well, we failed. We don't have a complete paper about fitting models to data with arbitrary censoring and arbitrarily unreliable noise and censoring information. The many changes we made yesterday forced constant re-starting of the runs, and there are many unresolved issues in our use of black-box function integrators and optimization without analytic derivatives. So we aren't done. Ah well, we tried. It has been a great week; I really love irresponsibly ignoring all my real job requirements and hacking. Bloom pointed out that it probably isn't optimal for my LTFDFCF. (I guess I don't obey my own rules.)
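For readers wondering what this kind of problem looks like, here is a minimal toy sketch, not our actual model or code: detected points contribute a measurement density to the likelihood, censored points contribute the probability of falling below the detection threshold, and the fit uses a derivative-free optimizer because there are no analytic gradients. All names and the toy data are illustrative assumptions.

```python
# Toy censored-data likelihood sketch (illustrative only, not the paper's model).
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def neg_log_likelihood(params, y, censored, threshold, sigma):
    """params = (mu,): the single model parameter in this toy problem."""
    mu = params[0]
    ll = 0.0
    # Detected points: ordinary Gaussian measurement likelihood.
    ll += norm.logpdf(y[~censored], loc=mu, scale=sigma[~censored]).sum()
    # Censored points: probability mass below the detection threshold (the CDF);
    # in a harder problem this term would need a numerical (black-box) integrator.
    ll += norm.logcdf(threshold[censored], loc=mu, scale=sigma[censored]).sum()
    return -ll

# Toy data: some measurements fall below a detection threshold and are censored.
rng = np.random.default_rng(42)
truth, sigma = 3.0, np.full(20, 1.0)
y = rng.normal(truth, sigma)
threshold = np.full(20, 2.5)
censored = y < threshold

# Derivative-free optimization, since there are no analytic derivatives.
result = minimize(neg_log_likelihood, x0=[0.0],
                  args=(y, censored, threshold, sigma), method="Nelder-Mead")
print("maximum-likelihood mu:", result.x[0])
```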
In the morning I gave my seminar at the Center for Time Domain Informatics. The audience was mixed and I argued (to the choir, I think) that astronomical catalogs are very limited and we have to explore—as a community—probabilistic replacements. Currently I think the replacement has to be the raw data instrumented with hypothesis-testing software (that is, an executable likelihood function), and (at present) this would mean living in the cloud, for practical reasons. But I talked more about promise than implementation. After my talk, Lisa Randall (Harvard) suggested that there might be some relationships between these ideas and next-generation ideas for inference with data from the LHC and other particle experiments.
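To make the idea slightly more concrete, here is a hypothetical sketch of what an executable likelihood function shipped alongside the raw data might look like, as opposed to a static catalog entry. The class name, the Gaussian pixel-noise model, and the numbers are all my illustrative assumptions, not anything I presented as an implementation.

```python
# Sketch of an "executable likelihood" interface (illustrative assumptions only).
import numpy as np

class SourceLikelihood:
    """Wraps the raw pixel data for one source and exposes ln p(data | flux)."""

    def __init__(self, pixels, pixel_variance, psf_model):
        self.pixels = pixels                  # raw image cutout
        self.pixel_variance = pixel_variance  # per-pixel noise variance
        self.psf_model = psf_model            # unit-flux PSF evaluated on the cutout

    def ln_likelihood(self, flux):
        """Gaussian pixel-noise log-likelihood for a proposed flux."""
        residual = self.pixels - flux * self.psf_model
        return -0.5 * np.sum(residual ** 2 / self.pixel_variance
                             + np.log(2.0 * np.pi * self.pixel_variance))

# A downstream user tests hypotheses against the data themselves, rather than
# trusting a catalog's point estimate:
source = SourceLikelihood(pixels=np.array([0.5, 4.1, 0.4]),
                          pixel_variance=np.full(3, 0.04),
                          psf_model=np.array([0.1, 0.8, 0.1]))
print(source.ln_likelihood(5.0), source.ln_likelihood(1.0))
```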