2010-05-25

marginalized likelihood

You can't marginalize over a parameter in the likelihood without a prior because the units are wrong! The likelihood is the probability of the data given the model, and therefore has units of inverse data. If you want to marginalize out some nuisance parameter, you have to multiply the likelihood by a prior probability distribution for that parameter and then integrate; the prior carries units of inverse parameter, so the prior times the parameter differential is dimensionless and the integral still has units of inverse data. So, as I like to point out, only Bayesians can marginalize.
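To write that out (my notation: $D$ is the data, $\theta$ the nuisance parameter, $\alpha$ the parameters you keep), the marginalized likelihood is

$$ p(D \mid \alpha) = \int p(D \mid \theta, \alpha)\, p(\theta \mid \alpha)\, \mathrm{d}\theta \quad ,$$

where $p(D \mid \theta, \alpha)$ has units of inverse data and $p(\theta \mid \alpha)\, \mathrm{d}\theta$ is dimensionless, so the left-hand side still has units of inverse data. Integrate the likelihood alone over $\theta$ and you get something with units of $\theta$ per data, which is not a probability density for anything.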

Adam Myers and I are using marginalization to get the likelihood for parameters of a distribution for a quantity (in this case exoplanet mass), marginalizing out every individual quantity (mass) estimate. You have to marginalize out the individual mass determinations because they are all terribly biased individually, and it is only the underlying or uncertainty-deconvolved distribution that you really care about. More soon, especially if we succeed!
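Here is a minimal numerical sketch of what that marginalization can look like in practice, assuming you already have, for each planet, a set of mass samples drawn under some interim prior; the power-law population model and every name in it (log_population_likelihood, mass_samples, interim_prior) are made up for illustration, not taken from our actual code:

```python
import numpy as np

# Sketch: infer population parameters alpha by marginalizing out each
# planet's individual mass.  For each planet n, the integral
# int p(D_n | m) p(m | alpha) dm is approximated by reweighting K mass
# samples for that planet drawn under an interim prior pi(m).

def log_population_likelihood(alpha, mass_samples, interim_prior):
    """Return, up to an alpha-independent constant per planet,
    sum_n log int p(D_n | m) p(m | alpha) dm."""
    logL = 0.0
    for m in mass_samples:                   # m: mass samples for one planet
        pop = alpha * m ** (-(alpha + 1.0))  # toy population model: Pareto(alpha), m >= 1
        weights = pop / interim_prior(m)     # divide out the interim prior
        logL += np.log(np.mean(weights))     # mean weight approximates the integral
    return logL

# Toy usage: three planets, each with 1000 mass samples that here are just
# flat interim-prior draws on [1, 20] (as if the data were uninformative).
rng = np.random.default_rng(42)
mass_samples = [rng.uniform(1.0, 20.0, size=1000) for _ in range(3)]
flat_prior = lambda m: np.full_like(m, 1.0 / 19.0)
print(log_population_likelihood(1.5, mass_samples, flat_prior))
```

The appeal of reweighting pre-computed samples like this is that you never have to redo the per-planet fits for every trial value of the population parameters.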

3 comments:

  1. Haha! I was doing exactly the same thing today (although not with exoplanets). I declared: death to stacking analyses! and introduced several nuisance parameters per galaxy before promptly marginalising them all away. And so now I get to infer the scatter in the relations as well as the mean slopes and normalisations :-)

  2. "The likelihood is the probability of the data given the model, and therefore has units of inverse data"

    The likelihood is dimensionless if you define it as the probability rather than the probability density, i.e. multiply it by dD and it's magically dimensionless!

    The real reason you need a prior to marginalise is that that's what the sum rule tells you to do. It has nothing to do with units.

  3. Actually, even if you make the likelihood dimensionless, there are nonetheless dimensional (units) reasons that you can't integrate it without first multiplying by the prior. But I am not claiming that this is in conflict with the sum rule; I am just giving another justification for the sum rule.
