Camp Hogg met with David Blei (Princeton) again today to continue discussing the inference generalities that came up last week. In particular, we wanted to work out which aspects of our planned exoplanet inferences will be tractable and which will not, along with whether we can co-opt technology from other domains where complex graphical models have been sampled or marginalized. Blei's initial reaction to our model is that it contains way too much data to be sampled, so we will have to settle for optimization. He softened when we explained that we can already sample a big chunk of it, but he still expects that fully marginalizing out the exoplanets on the inside (we are trying to infer population properties at the top level of the hierarchy, using things like the Kepler data at the bottom level) will be impossible. Along the way, we learned that our confusion about how to treat model parameters that adjust model complexity arises in part because that genuinely is confusing.
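For concreteness, here is a hedged sketch of the marginalization at issue, in my own notation (the symbols below are my assumptions about the model structure, not anything we wrote down with Blei). If \(\theta\) denotes the population-level parameters, \(\omega_n\) the exoplanet parameters for star \(n\), and \(y_n\) that star's Kepler data, then the likelihood for \(\theta\) requires

\[
p(\{y_n\}_{n=1}^{N} \mid \theta) \;=\; \prod_{n=1}^{N} \int p(y_n \mid \omega_n)\, p(\omega_n \mid \theta)\,\mathrm{d}\omega_n ,
\]

and each inner integral runs over a parameter space whose dimension depends on the (unknown) number of planets in system \(n\), which is one way to see why parameters that adjust model complexity are genuinely confusing.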
We also discussed prior and posterior predictive checks, which Blei says he wants to do at scale and for science. I love that! He has the intuition that posterior predictive checks could revolutionize probabilistic inference with enormous data sets. He gave us homework on this subject.
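For readers who haven't seen one: a posterior predictive check simulates replicated data from the fitted model and asks whether the real data look typical of those replications. Below is a minimal sketch in Python, under a toy Gaussian model with known variance; the model, the test statistic, and all names here are my own choices for illustration, not anything from the discussion.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy setup (assumption: a Gaussian model with known unit variance,
# standing in for any real likelihood): observed data, plus samples
# from the exact posterior for the mean under a flat prior.
y_obs = rng.normal(loc=1.0, scale=1.0, size=200)
mu_samples = rng.normal(loc=y_obs.mean(),
                        scale=1.0 / np.sqrt(len(y_obs)),
                        size=1000)

# Test statistic: pick something scientifically meaningful; here, the max.
def T(y):
    return y.max()

# Posterior predictive check: for each posterior draw, simulate a
# replicated data set and compare its test statistic to the observed one.
T_rep = np.array([T(rng.normal(loc=mu, scale=1.0, size=len(y_obs)))
                  for mu in mu_samples])
p_value = np.mean(T_rep >= T(y_obs))
print(f"posterior predictive p-value for T = max: {p_value:.3f}")
```

A p-value near 0 or 1 flags a failure of the model in whatever aspect the statistic T captures; choosing statistics that are informative at enormous scale is, I take it, where the hard (and scientific) part lies.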
Today was also Muandet's last day here in NYC. It was great to have him. Like most visitors to Camp Hogg, he goes home with more unanswered questions than he arrived with, and new projects to boot!