I had a valuable chat in the morning with Adrian Price-Whelan (Princeton) about hypothesis testing for stellar pairs. The hypotheses are: unbound and unrelated field stars, co-moving but unbound, and co-moving because bound. We discussed this problem as a hypothesis test, and also as a parameter estimation (estimating binding energy and velocity difference). My position (which my loyal reader knows well) is that you should never do a hypothesis test when you can do a parameter estimation.
A Bayesian hypothesis test involves computing fully marginalized likelihoods (FMLs). A parameter estimation involves computing partially marginalized posteriors. When I present this difference to Dustin Lang (Toronto), he tends to say “how can marginalizing out all but one of your parameters be so much easier than marginalizing out all your parameters?”. Good question! I think the answer has to do with the difference between estimating densities (probability densities that integrate to unity) and estimating absolute probabilities (numbers that sum to unity). But I can't quite get the argument right.
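A toy numerical sketch of the asymmetry (entirely my own invented example, a flat-prior Gaussian, not the stellar-pair model): a one-dimensional marginal posterior falls out of posterior samples as a histogram, with no normalization constant ever computed, while the FML is an absolute number requiring an explicit integral over all parameters, whose grid cost grows exponentially with dimension.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy problem (my assumption, not the stellar-pair model): the likelihood
# is a unit Gaussian in theta, and the prior is flat on [-5, 5] per axis.
def log_likelihood(theta):
    d = theta.shape[-1]
    return -0.5 * np.sum(theta**2, axis=-1) - 0.5 * d * np.log(2.0 * np.pi)

# --- Parameter estimation: a partially marginalized posterior ---
# Given posterior samples (drawn directly here; from MCMC in practice),
# the 1-D marginal for one parameter is just a histogram of one column.
# No normalization constant is needed: samples carry the density shape.
D = 6
samples = rng.normal(size=(200_000, D))
density, edges = np.histogram(samples[:, 0], bins=60, density=True)

# --- Hypothesis testing: the fully marginalized likelihood (FML) ---
# The FML is an absolute number: the integral of likelihood times prior
# over ALL parameters. A grid with n points per axis costs n**D calls.
n, D_small = 41, 2
g = np.linspace(-5.0, 5.0, n)
grid = np.stack(np.meshgrid(g, g, indexing="ij"), axis=-1)  # (n, n, 2)
prior = (1.0 / 10.0) ** D_small      # flat prior density on [-5, 5]^2
dV = (10.0 / (n - 1)) ** D_small     # grid cell volume
fml = np.sum(np.exp(log_likelihood(grid))) * prior * dV
print(f"FML on a D=2 grid: {fml:.4f}")          # ~ 0.01
print(f"the same grid in D={D} would need {n**D} evaluations")
```

The marginal comes essentially for free once you can sample the posterior; the FML demands the full-dimensional integral, which is why the brute-force cost explodes with dimension.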
In my mind, this is connected to an observation I have seen over at Andrew Gelman's blog more than once: When predicting the outcome of a sporting event, it is much better to predict a pdf over final scores than to predict the win/loss probability directly; the score pdf implies the win probability (by summing over outcomes), but not conversely. This is absolutely my experience (context: horse racing).