2018-09-25

structure of all models, ever; correlation-function representation

Early in the day I had a long conversation with Leistedt (NYU) about the philosophy of our machine-learning projects. We refined further our view that the machine learning should be part of a larger causal structure that makes sense. My position is that you can think of most (hard) physics problems as having some kind of generalized graphical model with three high-level boxes. One is called “things I know well but don't care about”, which contains things like the noise model, the instrument model, and calibration parameters. Another is called “things I don't know and don't care about”, which contains things like foregrounds, backgrounds, and other nuisances. And the last is called “things I don't know and deeply care about”. This last one is our rigid physics model. And the middle one is where the machine learning goes! If we could build models like this very generally, we would be infinitely powerful.
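Schematically, the three boxes correspond to a factorization of the joint probability (the symbol names here are my own labels for the boxes, not anything we wrote down in the conversation):

```latex
p(\mathrm{data}, \theta_{\mathrm{cal}}, \theta_{\mathrm{nuis}}, \theta_{\mathrm{phys}})
  = p(\mathrm{data} \mid \theta_{\mathrm{cal}}, \theta_{\mathrm{nuis}}, \theta_{\mathrm{phys}})\,
    p(\theta_{\mathrm{cal}})\, p(\theta_{\mathrm{nuis}})\, p(\theta_{\mathrm{phys}})
```

Here \(\theta_{\mathrm{cal}}\) (known well, not cared about) gets a tight, informative prior; \(\theta_{\mathrm{nuis}}\) (unknown, not cared about) is the flexible, machine-learned component, to be marginalized out; and \(\theta_{\mathrm{phys}}\) (unknown, deeply cared about) is the rigid physics model whose posterior we actually want.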

At mid-day, Storey-Fisher and I talked about all the things we could do if we had a correlation function that is not values-in-bins but instead a linear combination of functions. We could look for cosmological gradients; we could do clustering multipoles at small scales; we could estimate the correlation function and power spectrum simultaneously; we could extract Fisher-optimal summary statistics for cosmological parameter estimation. And all these things are possible with our new correlation-function estimator. Next step: getting the code fast enough to do non-trivial tests.
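The basic move can be sketched in a few lines: instead of binning pair separations, project a Landy-Szalay-style combination of pair counts onto basis functions and solve for amplitudes. This is only a minimal toy (random separations standing in for real DD, DR, and RR pair lists, a made-up polynomial basis, and simplified per-pair normalizations), not necessarily the exact estimator we discussed:

```python
import numpy as np

# toy basis functions f_k(r); any linearly independent set would do here
basis = [lambda r: np.ones_like(r),
         lambda r: r / 100.0,
         lambda r: (r / 100.0) ** 2]

def project(seps):
    """Mean of each basis function over a list of pair separations."""
    return np.array([f(seps).sum() for f in basis]) / seps.size

def project_outer(seps):
    """Mean outer product f(r) f(r)^T over a list of pair separations."""
    F = np.stack([f(seps) for f in basis])  # shape (K, Npairs)
    return F @ F.T / seps.size

# hypothetical pair separations, standing in for data-data (DD),
# data-random (DR), and random-random (RR) pair lists
rng = np.random.default_rng(17)
dd = rng.uniform(10.0, 100.0, 5000)
dr = rng.uniform(10.0, 100.0, 10000)
rr = rng.uniform(10.0, 100.0, 20000)

# Landy-Szalay-style numerator projected onto the basis, then solved
# for amplitudes a_k; xi(r) = sum_k a_k f_k(r) replaces values-in-bins
v = project(dd) - 2.0 * project(dr) + project(rr)
Q = project_outer(rr)
a = np.linalg.solve(Q, v)

# evaluate the continuous estimate anywhere, not just at bin centers
r_grid = np.linspace(10.0, 100.0, 50)
xi = sum(ak * f(r_grid) for ak, f in zip(a, basis))
```

Because the estimate is a smooth function of separation, the same machinery extends naturally to bases that carry extra dependence (position on the sky for gradients, angle for multipoles, Fourier modes for a simultaneous power spectrum).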

In the astro seminar at NYU, Savvas Koushiappas (Brown) showed us weak but very interesting evidence that there may be a dark-matter annihilation signature in the NASA Fermi data on the Reticulum II dwarf galaxy. Obviously this would be incredibly important if it holds up as more data and better calibrations come in.
