Today was open-discussion day at Not Ready for Gaia. We put subjects on the board and then voted. To my great pleasure, the quantitative, probabilistic fitting of cold streams in the Milky Way made the cut. The most impressive thing that emerged (for me) when we discussed the relationship between what we should do according to the work of Sanders (Oxford) and what we did do in this paper is that we can describe both methods within a single, simple generative-model framework, with the Sanders approximation being far better than ours. As I perhaps mentioned in a previous post, one interesting thing (to me) about the group of people in the room is that there was essentially no disagreement about how inference ought to be done. Probabilistic modeling has won. Generating the measurements has won. Perhaps that is not surprising, since it all more-or-less originates with Laplace working on the Solar System.
In a brief diatribe about computational complexity, I argued that the computational cost of doing dynamical modeling of the Milky Way will probably scale as the cube (or worse) of the number of stars N: there is computation that needs to be done for each star (a factor of N), the number of orbits or tori that need to be computed probably scales roughly as the number of data points (another factor of N), and our ambitions about the complexity (freedom) of the models we are fitting will also rise with the data size (a third factor of N). This is just made up, but I doubt it is very far wrong.
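The cubic-scaling diatribe can be sketched as back-of-the-envelope arithmetic. Everything here is an assumption made up for illustration (the function name, the unit per-star cost, and the proportionalities are mine, not a measured cost model); it just multiplies out the three factors above.

```python
# A back-of-the-envelope sketch of the cubic-scaling argument.
# All names and proportionality constants are assumptions for
# illustration, not a measured cost model.

def modeling_cost(n_stars, per_star_work=1.0):
    """Rough cost of dynamical modeling with n_stars data points."""
    n_tori = n_stars    # orbits/tori to compute scales with the data size
    n_params = n_stars  # model freedom (parameters) also grows with the data
    # Work for each star, for each torus, for each direction of model freedom:
    return per_star_work * n_stars * n_tori * n_params

# Under this toy model, doubling the survey multiplies the cost by 2**3 = 8:
print(modeling_cost(2000) / modeling_cost(1000))  # -> 8.0
```

The point of the toy model is only that any per-star work whose other factors also grow with the survey compounds multiplicatively, which is why the exponent ends up at three (or worse) rather than one.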