As the Loyal Reader (tm) knows, Foreman-Mackey and I have been working on exoplanet detection in Kepler data. Today we decided to switch gears from seeing planet detection as a parameter estimation problem (do we find a planet with sufficiently well-measured depth to be declared Real?) to a hypothesis test problem (does the hypothesis "there is a planet here" beat the null hypothesis "there isn't"?). We are using a Gaussian Process to model the stochastic variations of the star, and a rigid but good exoplanet model for the mean expected behavior of the stellar flux subject to transits. We did a bit of pair-coding, even.
By the end of the day, Foreman-Mackey could show that our detection capability is likely to be an extremely strong function of the hyperparameters of the Gaussian Process, and that this function will very likely vary strongly from star to star. In particular, we expect that sufficiently variable or faint stars will never yield hyperparameter settings sensitive enough to detect Earth-like planets (the planets we need to become internet famous). That will have significant impacts on our decision-making and experimental design. We have yet to insert all this hypothesis testing into a brute-force search loop, in part because our code is so damned slow (but righteous!).
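The test described above can be sketched as a comparison of Gaussian Process marginal likelihoods: one with a flat mean (the null) and one with a transit mean model subtracted. This is a minimal toy version, not our actual code; the squared-exponential kernel, the box-shaped transit, and all hyperparameter values below are illustrative assumptions.

```python
import numpy as np

def sq_exp_kernel(t, amp, ell):
    # squared-exponential covariance for the stellar variability;
    # amp and ell are the (assumed) GP hyperparameters
    dt = t[:, None] - t[None, :]
    return amp**2 * np.exp(-0.5 * (dt / ell)**2)

def gp_loglike(resid, K, yerr):
    # Gaussian log-likelihood of the residuals under covariance K plus white noise
    C = K + np.diag(yerr**2)
    sign, logdet = np.linalg.slogdet(C)
    alpha = np.linalg.solve(C, resid)
    return -0.5 * (resid @ alpha + logdet + len(resid) * np.log(2.0 * np.pi))

def box_transit(t, t0, duration, depth):
    # crude box-shaped transit standing in for the rigid exoplanet mean model
    model = np.ones_like(t)
    model[np.abs(t - t0) < 0.5 * duration] -= depth
    return model

# synthetic light curve with one injected transit (all numbers made up)
rng = np.random.default_rng(42)
t = np.linspace(0.0, 10.0, 500)
yerr = 5e-4 * np.ones_like(t)
flux = box_transit(t, t0=5.0, duration=0.3, depth=2e-3) \
       + yerr * rng.standard_normal(t.size)

K = sq_exp_kernel(t, amp=1e-3, ell=1.0)  # assumed hyperparameter settings

ll_null = gp_loglike(flux - 1.0, K, yerr)  # "there isn't a planet"
ll_alt = gp_loglike(flux - box_transit(t, 5.0, 0.3, 2e-3), K, yerr)  # "there is"
delta_ll = ll_alt - ll_null  # detection statistic; positive favors the planet
print(delta_ll)
```

Note how the GP hyperparameters enter the statistic directly through `K`: a larger `amp` or a shorter `ell` lets the GP absorb the transit dip itself, shrinking `delta_ll`, which is exactly the star-to-star sensitivity dependence described above. The dense `np.linalg.solve` here is O(N^3), which is also why a brute-force search loop over this kind of thing gets slow.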
IMO: The hypothesis "there isn't a planet" is actually false based on prior info. There's going to be at least a rock.
I tend to not like priors with point masses in them, especially if the alternative part doesn't put a lot of mass "near" the point mass. E.g., 0.5*(delta function) + 0.5*(uniform) is usually dumb (and implicit in a lot of "model selection" work). Cauchy I could go along with.
Let's make sure that Bekki Dawson (CfA/Berkeley) gets cred for setting us on this path too.