We had a star-studded group meeting today. It kicked off with Charlie Conroy (Harvard) talking about some of his recent projects. In one, he looks at the time dependence of pixel brightnesses in M87, because the long-period variables in the stellar population lead to long-period variations in brightness. In principle these variations are a function of stellar population age and density. He showed data from a huge but under-exploited HST program. In another project, he is varying unknown physical properties of atomic transitions within a stellar atmosphere model to make an interpretable but data-driven model for stellar atmospheres. This is a great project, but it involves coding up all one's prior beliefs about what can vary, how, and in what ways. That's a very complicated prior pdf! In a third project, he discussed the limits of chemical tagging (with Yuan-Sen Ting, MPIA, with whom I will be working this summer). In this project, they find that even a small change in the precision with which chemical abundances can be measured can have a huge impact on any tagging project.
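The precision point is easy to see in a toy simulation. Here is a minimal sketch (all numbers illustrative, not from the Conroy–Ting work): two clusters whose mean abundances differ by 0.10 dex, tagged with the optimal midpoint cut, where the misassignment fraction blows up as the measurement uncertainty approaches the cluster separation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy setup (illustrative numbers only): two clusters whose true [Fe/H]
# values differ by delta = 0.10 dex, with negligible intrinsic spread.
delta, n = 0.10, 5000
truth = np.repeat([0.0, delta], n)
labels = np.repeat([0, 1], n)

def misassignment_rate(sigma):
    """Fraction of stars tagged to the wrong cluster when abundances are
    measured with Gaussian noise sigma, using the optimal midpoint cut."""
    observed = truth + sigma * rng.normal(size=truth.size)
    tagged = (observed > delta / 2).astype(int)
    return np.mean(tagged != labels)

for sigma in (0.01, 0.03, 0.05, 0.10):
    print(f"sigma = {sigma:.2f} dex -> misassigned fraction = "
          f"{misassignment_rate(sigma):.3f}")
```

Analytically the misassignment rate is Phi(-delta / (2 sigma)), so improving from 0.05 to 0.03 dex precision cuts the confusion by roughly a factor of three in this toy, which is the flavor of sensitivity the project finds.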
In the second half of group meeting, Andrew Gordon Wilson (CMU) spoke about his new work on kernel learning, in which he optimizes the likelihood of a Gaussian process whose kernel is represented as a mixture of Gaussians in spectral space. He showed some amazing demos in which the learned kernel yields a very different covariance matrix from the empirical covariance, which is highly relevant to modern cosmology (where the empirical covariance is all we ever use!). He also made an important philosophical point about model complexity: for every simple model that works well (in a Bayesian sense), there are other, more complicated models that will always work better (also in that same Bayesian sense). This plays well with my disagreement with all the MacKay-like arguments that Bayes encapsulates Occam's Razor. It just doesn't!
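For the record, the kernel-learning setup can be sketched in a few lines. This is my minimal reading of the spectral-mixture idea (the stationary kernel is a sum of Gaussians in frequency space, which Fourier-transforms to Gaussian-windowed cosines in lag space); the function names and parameter choices below are my own, not Wilson's code.

```python
import numpy as np

def spectral_mixture_kernel(tau, weights, means, variances):
    """Stationary kernel whose spectral density is a mixture of Q Gaussians:
    k(tau) = sum_q w_q * exp(-2 pi^2 tau^2 v_q) * cos(2 pi tau mu_q)."""
    tau = np.asarray(tau)[..., None]  # broadcast lags over the Q components
    return np.sum(weights * np.exp(-2.0 * np.pi**2 * tau**2 * variances)
                  * np.cos(2.0 * np.pi * tau * means), axis=-1)

def gp_log_likelihood(params, t, y, Q, jitter=1e-6):
    """Log marginal likelihood of a zero-mean GP with the kernel above.

    params holds log(weights), log(means), log(variances) concatenated,
    so the optimizer works in an unconstrained space."""
    w, mu, v = np.exp(params.reshape(3, Q))
    K = spectral_mixture_kernel(t[:, None] - t[None, :], w, mu, v)
    K += jitter * np.eye(len(t))  # numerical stabilization of the solve
    _, logdet = np.linalg.slogdet(K)
    alpha = np.linalg.solve(K, y)
    return -0.5 * (y @ alpha + logdet + len(t) * np.log(2.0 * np.pi))
```

"Kernel learning" then just means maximizing this log likelihood over the 3Q parameters (e.g. with scipy.optimize.minimize on the negative of it), rather than fixing a squared-exponential kernel by hand or plugging in an empirical covariance.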