So many projects! I love my summers in Heidelberg. Today I spent time with Jessica Birky (UCSD), working through the figures that would support a paper on M-dwarf stars with The Cannon. She has run The Cannon on a tiny training set of M-dwarf stars in the APOGEE data, and it seems to work (despite the training set's small size and limited quality). We are now diagnosing whether it all makes sense.
With Christina Eilers (MPIA), Hans-Walter Rix (MPIA) and I discussed the amazing fact that she can optimize (a more sophisticated version of) The Cannon over all internal parameters and all stellar labels in a single shot; this is a hundred-thousand-parameter non-linear least-squares fit! It seems to be working, but there are oddities to follow up. She is dealing with the point that many stars have bad, missing, or noisy labels.
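For concreteness, here is a toy, single-shot least-squares fit of both the spectral coefficients and all the per-star labels (my own construction, with made-up shapes and a generic solver; not Eilers's actual code — the real problem has of order a hundred thousand parameters):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
N, J, K = 40, 30, 2                    # stars, pixels, labels (toy sizes)
P = 1 + K + K * (K + 1) // 2           # quadratic-in-labels features

def vectorize(labels):
    """Cannon-style quadratic features: 1, labels, upper triangle of l l^T."""
    iu = np.triu_indices(K)
    quad = (labels[:, :, None] * labels[:, None, :])[:, iu[0], iu[1]]
    return np.concatenate([np.ones((len(labels), 1)), labels, quad], axis=1)

def residuals(params, fluxes, ivars):
    theta = params[:P * J].reshape(P, J)      # internal (spectral) parameters
    labels = params[P * J:].reshape(N, K)     # per-star labels
    return (np.sqrt(ivars) * (fluxes - vectorize(labels) @ theta)).ravel()

# fake data; in the real problem the label initialization comes from the
# (sometimes bad, missing, or noisy) catalog labels
true_labels = rng.normal(size=(N, K))
true_theta = rng.normal(size=(P, J))
fluxes = vectorize(true_labels) @ true_theta + 0.01 * rng.normal(size=(N, J))
ivars = np.full((N, J), 1.0 / 0.01 ** 2)

x0 = np.concatenate([0.1 * rng.normal(size=P * J),
                     (true_labels + 0.3 * rng.normal(size=(N, K))).ravel()])
fit = least_squares(residuals, x0, args=(fluxes, ivars))
```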
With Kareem El-Badry (Berkeley), Rix and I worked through the math of going from an SB2 catalog (that is, a catalog of stars known to be binary because their spectra are better fit by superpositions of pairs of stars than by single stars) to an inference about the underlying binary population. This project meshes well with the plans that Adrian Price-Whelan (Columbia) and I have for the summer.
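As a zeroth-order cartoon of that math (my toy formulation, with invented numbers, not what we actually derived): treat each survey star as entering the SB2 catalog with probability equal to the binary fraction times a detection efficiency, so the catalog size becomes a binomial draw:

```python
import numpy as np
from scipy.stats import binom

N, k = 20000, 350      # survey stars and SB2 detections (invented numbers)
p_det = 0.12           # SB2 detection efficiency, e.g. from injection-recovery tests

f_grid = np.linspace(1e-4, 1.0, 1024)          # binary fraction f_b
log_like = binom.logpmf(k, N, f_grid * p_det)  # each star is an SB2 w.p. f_b * p_det
post = np.exp(log_like - log_like.max())       # flat prior on f_b
post /= post.sum() * (f_grid[1] - f_grid[0])   # normalized posterior on the grid
```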
With Ana Bonaca (Harvard), I discussed further marginalizations in her project to determine the information content in stellar streams. She finds that the assumed form of the potential and the assumed progenitor phase-space coordinates are very informative; that is, if we relax those assumptions to give the model more freedom, we expect the streams to become less constraining of the Galactic potential. We discussed ways to test this in the next few days.
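The marginalization itself is a Schur complement on the Fisher information matrix; here is a toy illustration (random matrices standing in for her stream computations) that freeing nuisance parameters can only inflate the Cramér-Rao variances:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8))
F = A @ A.T                        # stand-in Fisher matrix over 8 parameters
pot = np.arange(3)                 # parameters of interest (say, the potential)
nui = np.arange(3, 8)              # nuisances: potential form, progenitor phase space

Faa = F[np.ix_(pot, pot)]
Fab = F[np.ix_(pot, nui)]
Fbb = F[np.ix_(nui, nui)]
F_marg = Faa - Fab @ np.linalg.solve(Fbb, Fab.T)  # Schur complement

# Cramér-Rao variances can only grow when nuisances are freed:
grew = np.diag(np.linalg.inv(F_marg)) >= np.diag(np.linalg.inv(Faa)) - 1e-12
assert grew.all()
```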
With Stephen Feeney (Flatiron) and Daniel Mortlock (Imperial) I discussed the possibility of writing a paper about the Lutz-Kelker correction (don't do it!) and posterior probabilistic catalogs (don't make them!), and what scope such a paper might have. We tentatively decided to try to put something together.
With Matthias Samland (MPIA) and Jeroen Bouwman (MPIA) I discussed their ideas to move the CPM (the causal pixel model we used to de-trend Kepler and K2 light curves) to the domain of direct detection of exoplanets with coronagraphs. This is a great idea! We discussed the way to choose predictor pixels, and the form that the likelihood takes when you marginalize out the superposition of predictor pixels. This is a very promising software direction for future coronagraph missions. But we noticed that many projects and observing runs might be data-limited: People take hundreds of photon-limited exposures instead of thousands of read-noise-limited exposures. I think that's a mistake: No current results are, in the end, photon-noise limited! We put Samland onto looking at the subspace in which the pixel variations live.
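Schematically, the marginalized likelihood we discussed looks like this toy sketch (my shapes and prior choices, not Samland's pipeline): put a Gaussian prior on the predictor-pixel weights and integrate them out, which leaves a Gaussian likelihood whose covariance picks up a low-rank term:

```python
import numpy as np

rng = np.random.default_rng(1)
T, P = 500, 64                       # exposures and predictor pixels (toy sizes)
X = rng.normal(size=(T, P))          # predictor-pixel time series
y = X @ rng.normal(size=P) + 0.1 * rng.normal(size=T)   # target-pixel data

sigma2, lam = 0.1 ** 2, 1.0          # noise variance; prior variance on the weights
C = sigma2 * np.eye(T) + lam * (X @ X.T)  # weights integrated out of the likelihood

sign, logdet = np.linalg.slogdet(C)
loglike = -0.5 * (y @ np.linalg.solve(C, y) + logdet + T * np.log(2 * np.pi))

# the "subspace in which the pixel variations live" shows up as the
# top singular vectors of the predictor matrix
U, s, Vt = np.linalg.svd(X, full_matrices=False)
```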
I love my job!
Given that the LIGO catalog is already "probabilistic" (see: LVT151012), what is your objection to such catalogs? Or should I just read the paper when it comes out? (And what about https://arxiv.org/abs/1211.5805 ?)
I have somewhat changed my view since that paper, but the key thing is: It is very hard to use a probabilistic catalog that is a representation of a posterior PDF (as opposed to a likelihood function). We have various papers about how to use them (see the papers with Foreman-Mackey and the paper with Meyers), but it isn't easy to use them correctly.
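For what it's worth, the correct usage looks roughly like this toy version of the interim-prior reweighting in those papers (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# pretend per-object "posterior samples" published under a broad interim
# prior p0(x) = Normal(0, 10); the true population is Normal(2, 1)
truths = rng.normal(2.0, 1.0, size=200)
samples = truths[:, None] + rng.normal(0.0, 0.5, size=(200, 64))

def log_pop(x, mu):       # candidate population model, Normal(mu, 1)
    return -0.5 * (x - mu) ** 2 - 0.5 * np.log(2.0 * np.pi)

def log_interim(x):       # the prior baked into the catalog
    return -0.5 * (x / 10.0) ** 2 - np.log(10.0) - 0.5 * np.log(2.0 * np.pi)

def pop_loglike(mu):
    # per object: average of p(sample | mu) / p0(sample) over its samples
    w = log_pop(samples, mu) - log_interim(samples)
    return np.sum(np.logaddexp.reduce(w, axis=1) - np.log(samples.shape[1]))

mus = np.linspace(0.0, 4.0, 81)
best = mus[np.argmax([pop_loglike(m) for m in mus])]   # lands near 2
```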