what's our special sauce? and Schwarzschild modeling

My day started with Dan Foreman-Mackey (Flatiron) smacking me down about my position that it is causal structure that makes our data analyses and inferences good. The context is: Why don't we just turn on the machine learning (like convnets and GANs and etc). My position is: We need to make models that have correct causal structure (like noise sources and commonality of nuisances and so on). But his position is that, fundamentally, it is because we control model complexity well (which is hard to do with extreme machine-learning methods) and we have a likelihood function: We can compute a probability in the space of the data. This gets back to old philosophical arguments that have circulated around my group for years. Frankly, I am confused.

In our Gaia DR2 prep meeting, I had a long conversation with Wyn Evans (Cambridge) about detecting and characterizing halo substructure with a Schwarzschild model. I laid out a possible plan (pictured below). It involves some huge numbers, so I need some clever data structures to trim the tree before we compute 10^20 data–model comparisons!
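To give a flavor of the tree-trimming idea (this is my sketch, not the actual plan from the meeting): a spatial tree over the orbit library lets you compare each star only against nearby library points in some phase-space coordinates, rather than evaluating every (star, orbit) pair. All array sizes and the tolerance radius here are made-up stand-ins.

```python
# Hypothetical sketch: prune an all-pairs data-model comparison with a k-d tree.
# Rather than evaluating every (star, orbit) pair, compare each star only to
# orbit-library points within a tolerance in (stand-in) phase-space coordinates.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(42)
stars = rng.uniform(size=(100_000, 3))   # stand-in for observed phase-space coords
orbits = rng.uniform(size=(50_000, 3))   # stand-in for a Schwarzschild orbit library

tree = cKDTree(orbits)
# For each star, only orbits within radius r are candidate matches.
pairs = tree.query_ball_point(stars, r=0.01)
n_comparisons = sum(len(p) for p in pairs)
print(n_comparisons, "vs brute-force", stars.shape[0] * orbits.shape[0])
```

The brute-force count here is 5 x 10^9 pairs; the tree query touches only a tiny fraction of them. The same logic scales (in principle) to the much larger numbers in the plan.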

Late in the day, I worked with Christina Eilers (MPIA) to speed up her numpy code. We got a factor of 40! (Primarily by capitalizing on sparseness of some operators to make the math faster.)
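The kind of trick involved (a generic illustration, not Eilers's actual code): when a linear operator is mostly zeros, storing it in a sparse format makes matrix–vector products scale with the number of nonzeros rather than the full matrix size, while giving identical answers.

```python
# Generic illustration of exploiting sparsity: a mostly-zero operator applied
# to a vector gives the same result in dense and sparse form, but the sparse
# product costs O(nnz) instead of O(n^2).
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
n = 4000
dense = np.zeros((n, n))
rows = rng.integers(0, n, size=10 * n)   # roughly 10 nonzeros per row
cols = rng.integers(0, n, size=10 * n)
dense[rows, cols] = rng.normal(size=10 * n)

A = sparse.csr_matrix(dense)             # sparse copy of the same operator
x = rng.normal(size=n)

y_dense = dense @ x
y_sparse = A @ x
assert np.allclose(y_dense, y_sparse)    # same math, much cheaper per product
```

For operators this sparse, the per-product speedup can easily reach the large factors quoted above, though the actual factor of 40 came from Eilers's specific operators.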
