Today Brendon Brewer (UCSB) showed up for a week of probabilistic reasoning. He sat in on the weekly meeting of Goodman, Hou, Foreman-Mackey, and me, where we discuss all things sampling and Bayes and (these days) stellar oscillations. In the afternoon, we tried to get serious about what we would accomplish by the end of the week. One thing both Brewer and we are interested in is the performance of samplers in situations where the number of parameters gets large. Another is adapting Foreman-Mackey's emcee code to make it capable of measuring the evidence integral (fully marginalized likelihood).
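For concreteness, here is a minimal sketch of one standard way to get at that integral, thermodynamic integration: sample the tempered posterior prior(x) * likelihood(x)**beta at a ladder of inverse temperatures and integrate the mean log-likelihood over beta. This is not the emcee API, just a toy random-walk Metropolis sampler on a one-dimensional Gaussian problem where the true evidence is known; all names and numbers below are hypothetical.

```python
import numpy as np

# Toy problem: Gaussian prior N(0, 1) on x, Gaussian likelihood N(1 | x, 0.5).
# The evidence Z = integral prior(x) * like(x) dx has a closed form here,
# so the thermodynamic-integration estimate can be checked against it.

def log_prior(x):
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

def log_like(x):
    return -0.5 * ((x - 1.0) / 0.5)**2 - np.log(0.5 * np.sqrt(2 * np.pi))

def metropolis(log_target, n_steps=20000, step=1.0, x0=0.0, seed=42):
    """Plain random-walk Metropolis; returns the chain as an array."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_target(x0)
    chain = np.empty(n_steps)
    for i in range(n_steps):
        xp = x + step * rng.normal()
        lpp = log_target(xp)
        if np.log(rng.uniform()) < lpp - lp:
            x, lp = xp, lpp
        chain[i] = x
    return chain

# Ladder of inverse temperatures beta in [0, 1]; at each beta, sample the
# tempered posterior prior(x) * like(x)**beta and record <log L>_beta.
betas = np.linspace(0.0, 1.0, 21)
mean_log_like = []
for beta in betas:
    chain = metropolis(lambda x: log_prior(x) + beta * log_like(x))
    mean_log_like.append(np.mean(log_like(chain[5000:])))  # drop burn-in
mean_log_like = np.array(mean_log_like)

# Thermodynamic integration: log Z = integral_0^1 <log L>_beta d(beta),
# done here with the trapezoid rule over the beta ladder.
log_Z = np.sum(0.5 * (mean_log_like[1:] + mean_log_like[:-1]) * np.diff(betas))

# Analytic evidence for the Gaussian-Gaussian toy problem, for comparison.
s2 = 1.0 + 0.5**2
log_Z_true = -0.5 * 1.0**2 / s2 - 0.5 * np.log(2 * np.pi * s2)
print(log_Z, log_Z_true)
```

The beta ladder and chain lengths are arbitrary choices; the point is only the shape of the calculation, not a recommendation of settings.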
In the end, we formulated a problem that is amusing to all of us and is also a good test of samplers in the large-numbers-of-continuous-parameters regime: modeling an astronomical image of a crowded stellar field. We figured out three potential publications: The first is a benchmark for samplers working in the astronomical context. The second is a Python wrap and release of Brewer's simulated-tempering-like sampler for dealing with multimodal distributions. The third is a short paper showing that you can infer the brightness function of stars below the brightness levels at which you can reliably resolve them in a crowded field. That last project fits squarely into the Camp Hogg brand of measuring the undetectable.
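To make the crowded-field idea concrete, here is a toy sketch of the kind of generative model we have in mind. Everything in it is a hypothetical choice (the power-law slope, the Gaussian PSF, the image size, the noise level): draw many stars from a brightness function, render them onto a noisy image, and note that even the individually undetectable ones shift the pixel statistics, which is the handle on the brightness function below the detection limit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy crowded field: n_stars point sources with power-law fluxes,
# rendered with a circular Gaussian PSF onto a small noisy image.
n_pix, n_stars = 64, 500
psf_sigma, sky_noise = 1.5, 1.0
alpha, f_min = 2.0, 0.5          # power-law slope and faint-end flux cutoff

# Inverse-transform sampling of a Pareto-like flux function N(f) ~ f^-alpha.
u = rng.uniform(size=n_stars)
fluxes = f_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))
xs = rng.uniform(0, n_pix, size=n_stars)
ys = rng.uniform(0, n_pix, size=n_stars)

# Render: each star contributes a Gaussian blob scaled by its flux.
yy, xx = np.mgrid[0:n_pix, 0:n_pix]
image = np.zeros((n_pix, n_pix))
for f, x0, y0 in zip(fluxes, xs, ys):
    image += f * np.exp(-0.5 * ((xx - x0)**2 + (yy - y0)**2) / psf_sigma**2)
image += sky_noise * rng.normal(size=image.shape)

# Stars far below the per-pixel noise still leave a statistical imprint:
# they raise the mean and skew the pixel histogram, so the faint end of
# the flux distribution is constrained even though no single faint star
# is detected.
print("mean pixel value:", image.mean(), "pixel std:", image.std())
```

The inference problem would then be to put priors on the star positions, fluxes, and the brightness-function parameters and sample that whole (very high-dimensional, very multimodal) posterior, which is exactly the regime the benchmark is meant to probe.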
Let me know if you want some real-world applications of this. PHAT's got some interesting UV imaging applications in the bulge of M31. Most of the FUV/NUV light is probably extreme HB stars, but they all have about the same magnitude, are faint, and make up almost all the light, so they're INSANELY crowded (imagine a delta-function luminosity function). Basically, you can detect either none of them or all of them, depending on your angular resolution (and until we get a 30-m in space, we're stuck in the former case). So, interesting but undetectable -- right in Camp Hogg's sweet spot.