2013-08-01

other people's problems

I spent four hours today talking with Gregory Green (CfA), Finkbeiner (CfA), and Schlafly (MPIA) about their Bayesian formalism for inferring a three-dimensional map of the dust in the Milky Way. I opened strong, saying that what they were doing couldn't be right. By the end of the four hours, they had pretty much convinced me that what they are doing is right. They do some tricky stuff with what I learned at ExoSAMSI are called "interim priors", and I am still working through the last bit of the math. I also encouraged them to think about going to much smaller pixels (much higher angular and radial resolution); if they are fully Bayesian, they shouldn't mind the spatial priors that will require. All that said, their results are awesome.
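The interim-prior trick amounts to importance reweighting: per-star posterior samples are drawn once under a simple interim prior, and inference under a better (say, spatially smooth) prior then only needs per-sample weights, not a re-run of the sampling. Here is a minimal toy sketch; all the numbers, priors, and the one-dimensional setup are made up for illustration and are not Green et al.'s actual model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy per-star inference of a parameter theta (think: distance modulus).
# Draw from a flat interim prior on [5, 15], then resample by a toy
# Gaussian likelihood so `samples` are posterior draws under that prior.
proposals = rng.uniform(5.0, 15.0, size=10000)
likelihood = np.exp(-0.5 * ((proposals - 10.0) / 1.0) ** 2)
samples = rng.choice(proposals, size=5000, p=likelihood / likelihood.sum())

def interim_prior(theta):
    # flat on [5, 15], density 1/10
    return np.where((theta > 5.0) & (theta < 15.0), 0.1, 0.0)

def better_prior(theta):
    # hypothetical informative prior, e.g. from a spatial model
    return np.exp(-0.5 * ((theta - 9.0) / 2.0) ** 2) / np.sqrt(2 * np.pi * 4.0)

# Importance weights: new prior over interim prior, at the sample positions.
w = better_prior(samples) / interim_prior(samples)
w /= w.sum()

# Posterior expectation under the better prior, without re-sampling:
mean_new = np.sum(w * samples)
```

The point is that the expensive per-star sampling happens once, under the interim prior; swapping in a different prior afterwards is just a reweighting.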

On that note, Nina Hernitschek (MPIA) spoke at Galaxy Coffee about Gaussian Process models for quasar light curves and the possibility of using them for reverberation mapping in the photometry alone. The model lives not just at fine time resolution but at literally infinite resolution in the time domain; it is non-parametric in that sense. In principle, the dust map could be non-parametric too, although I admit that would be a non-trivial project. In related news, Patel and Mykytyn spent the day working on baby steps towards building Gaussian Process models of multi-band quasar light curves.
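For concreteness, a GP light-curve model fits in a few lines. I am assuming a damped-random-walk (Ornstein-Uhlenbeck) kernel, a common choice for quasar variability; the kernel and all parameter values here are illustrative, not Hernitschek's:

```python
import numpy as np

def drw_kernel(t1, t2, amp=0.2, tau=100.0):
    # damped-random-walk covariance between magnitudes at times t1, t2 (days)
    return amp**2 * np.exp(-np.abs(t1[:, None] - t2[None, :]) / tau)

rng = np.random.default_rng(0)
t_obs = np.sort(rng.uniform(0.0, 500.0, size=30))  # irregular sampling
sigma = 0.02                                       # photometric errors (mag)
K = drw_kernel(t_obs, t_obs) + sigma**2 * np.eye(t_obs.size)
y_obs = rng.multivariate_normal(np.zeros(t_obs.size), K)  # fake light curve

# The GP is defined at *every* real-valued time, so the predictive mean and
# variance can be evaluated on an arbitrarily fine grid:
t_star = np.linspace(0.0, 500.0, 2000)
K_star = drw_kernel(t_star, t_obs)
mean = K_star @ np.linalg.solve(K, y_obs)
var = drw_kernel(t_star, t_star).diagonal() - np.einsum(
    "ij,ji->i", K_star, np.linalg.solve(K, K_star.T))
```

Because `t_star` can be made as fine as you like, the model really is a distribution over continuous functions of time; that is the sense in which it is non-parametric.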

Late in the day I continued yesterday's discussions of quasar SED fitting with Lusso (MPIA) and Hennawi. I was filled with ideas, but we more-or-less decided that Lusso's brute-force methods are fine, given the current sample sizes and scientific goals. Brute force grid-search optimization has the great advantage over all other optimization strategies that it is guaranteed to find your best point (at least on your grid). That's useful!
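Part of the appeal of grid search is that it is trivial to write down and audit. Here is a toy version on a power-law model, which is a stand-in for the SED fitting, not Lusso's actual code:

```python
import numpy as np

def chi2(amplitude, slope, x, y, yerr):
    # objective to minimize: chi-squared of a power-law model
    model = amplitude * x**slope
    return np.sum(((y - model) / yerr) ** 2)

# fake data drawn from a power law with known parameters
rng = np.random.default_rng(1)
x = np.linspace(1.0, 10.0, 50)
yerr = np.full(x.size, 0.05)
y = 2.5 * x**-1.0 + rng.normal(0.0, 0.05, size=x.size)

# brute force: evaluate the objective at every grid point and take the best
amps = np.linspace(0.5, 5.0, 64)
slopes = np.linspace(-2.0, 0.0, 64)
grid = np.array([[chi2(a, s, x, y, yerr) for s in slopes] for a in amps])
i, j = np.unravel_index(np.argmin(grid), grid.shape)
best_amp, best_slope = amps[i], slopes[j]
```

By construction the answer is the global minimum over the grid; the only failure mode is a grid that is too coarse or doesn't contain the truth.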

1 comment:

  1. "Gaussian Process models for quasar light curves and the possibility of using them for reverberation mapping in the photometry alone."

    This is a good idea. It gets rid of a lot of the problems with the cross-correlation recipe. There are a few different ways to implement it (e.g., do you integrate out the infinite-resolution light curve numerically or analytically? The former means many more parameters but faster likelihood evaluations). There are also decisions to make about the covariance function.

    Pancoast and I have an implementation, as has Zu (Ohio State) and probably others.
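The analytic option above reduces to a Gaussian-Process marginal likelihood: with a GP prior on the latent light curve and Gaussian photometric errors, the latent curve integrates out in closed form, leaving a multivariate-Gaussian likelihood over the observed data. A sketch, with a damped-random-walk kernel; the kernel and interface are illustrative and not taken from any of the implementations mentioned:

```python
import numpy as np

def gp_log_marginal_likelihood(t, y, yerr, amp, tau):
    # DRW/OU covariance for the latent light curve, plus independent
    # photometric noise on the diagonal
    K = amp**2 * np.exp(-np.abs(t[:, None] - t[None, :]) / tau)
    C = K + np.diag(yerr**2)
    # Gaussian log-density of y under N(0, C), via a Cholesky factor
    L = np.linalg.cholesky(C)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * t.size * np.log(2.0 * np.pi))
```

Maximizing (or sampling) this in `amp`, `tau`, and whatever lag parameters couple the bands is the whole inference; no explicit light-curve parameters appear.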
