The day started with tutorials on Bayesian reasoning and inference by Brewer and Marshall. Brewer started with a very elementary example, which led to lots of productive and valuable discussion, and tons of learning! Marshall followed with an example in a Jupyter notebook, built so that every person's copy of the notebook was solving a slightly different problem!
Baron and I got the full optimization of our galaxy deprojection project working! We can optimize both for the projections (Euler angles and shifts etc) of the galaxies into the images and for the parameters of the mixture of Gaussians that makes up the three-dimensional model of the galaxy. It worked on some toy examples. This is incredibly encouraging: although our example is a toy, we also haven't done anything non-trivial to help the optimizer. I am very excited; it is time to set the scope of what would be the first paper on this.
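The reason a mixture of Gaussians is such a pleasant choice here is that the projection step is exact: rotating a 3D Gaussian by the Euler angles and marginalizing along the line of sight yields another Gaussian in the image plane. Here is a minimal sketch of that projection (the function and variable names are my illustration, not our actual code, and I assume a ZYZ Euler convention):

```python
import numpy as np

def rotation_matrix(alpha, beta, gamma):
    # ZYZ Euler-angle rotation (one of several common conventions)
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rz1 = np.array([[ca, -sa, 0.], [sa, ca, 0.], [0., 0., 1.]])
    Ry = np.array([[cb, 0., sb], [0., 1., 0.], [-sb, 0., cb]])
    Rz2 = np.array([[cg, -sg, 0.], [sg, cg, 0.], [0., 0., 1.]])
    return Rz1 @ Ry @ Rz2

def project_mixture(means, covs, weights, euler, shift):
    """Project a 3D Gaussian mixture onto the (x, y) image plane.

    means: (n, 3) component means; covs: (n, 3, 3) covariances;
    weights: (n,) amplitudes; euler: three angles; shift: (2,) offset.
    Rotating each Gaussian and marginalizing over the line of sight
    gives an exact 2D Gaussian, so nothing is approximated here.
    """
    R = rotation_matrix(*euler)
    P = R[:2, :]                      # rotate, then keep the x, y rows
    means2d = means @ P.T + shift     # projected, shifted means
    covs2d = np.einsum('ij,njk,lk->nil', P, covs, P)  # P @ cov @ P.T per component
    return means2d, covs2d, weights
```

Because the projection is a closed-form, differentiable map from the 3D parameters to the image-plane Gaussians, the whole thing can be handed to a generic optimizer.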
In the evening, there was a debate about model comparison, with me on one side and the tag team of Brewer and Marshall on the other. They argued for the evidence integral, and I argued against, in all cases except model mixing. Faithful readers of this blog know my reasons, but they include, first, an appeal to utility (because decisions require utilities) and, second, a host of practical issues about computing expected-utility (and evidence) integrals that make precise comparisons impossible. And for imprecise comparisons, heuristics work great. Unfortunately it isn't a simple argument; it is a detailed engineering argument about real decision-making.
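One standard illustration of the practical issues (a toy of my own construction here, not a transcript of the debate): the evidence integral is directly sensitive to the prior volume, so widening a prior you don't strongly believe in shifts the log-evidence by order the log of the volume ratio, while the fit to the data doesn't change at all. A brute-force version for a one-parameter Gaussian problem:

```python
import numpy as np

rng = np.random.default_rng(17)
y = rng.normal(0.5, 1.0, size=20)  # toy data: unknown mean, known unit variance

def log_likelihood(mu):
    """Gaussian log-likelihood of the data for each mean in the array mu."""
    mu = np.atleast_1d(mu)
    resid = y[None, :] - mu[:, None]
    return -0.5 * (resid ** 2).sum(axis=1) - 0.5 * len(y) * np.log(2.0 * np.pi)

def log_evidence(tau, ngrid=200001):
    """Brute-force evidence under a zero-mean Gaussian prior of width tau."""
    grid = np.linspace(-6.0 * tau, 6.0 * tau, ngrid)
    log_prior = -0.5 * (grid / tau) ** 2 - 0.5 * np.log(2.0 * np.pi * tau ** 2)
    integrand = np.exp(log_likelihood(grid) + log_prior)
    # trapezoid rule; the likelihood is fixed, only the prior volume grows
    return np.log(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(grid)))

for tau in (1.0, 10.0, 100.0):
    print(tau, log_evidence(tau))
```

Each factor-of-ten widening of the prior knocks roughly ln 10 off the log-evidence, even though the data are fit identically well, which is the kind of prior-volume dependence that makes precise evidence comparisons so treacherous in practice.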