2012-11-27

Bayesian point estimates?

Phil Marshall and I spent the day talking about weak and strong lensing. With Fadely, we sanity-checked our ideas about finding strong lenses in multi-epoch, ground-based imaging like PanSTARRS and LSST. For lunch, we took the advice of 4ln.ch. With Foreman-Mackey we discussed an interesting idea in this Lance Miller paper: If you have N independent measurements of x, along with a correct likelihood function and a correct prior PDF for x, then the (expected value of the) mean (over data points) of the N posterior PDFs for x is the prior PDF for x. That is either obvious or surprising. To me: surprising. They use this fact to infer the prior, which is a super-cool idea, but I expect to be able to beat it dramatically with our hierarchical methods. In the airport at the end of the day, I contemplated what I would say to the LIGO team at Caltech.
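
To make that fact concrete: averaging Bayes's rule over the marginal distribution of the data gives integral of p(x|d) p(d) dd = p(x) times integral of p(d|x) dd = p(x), so the expected posterior is the prior. Below is a minimal Monte Carlo sketch of this under a toy conjugate-Gaussian model; the prior, noise level, and sample size are made up for illustration and are not anything from the Miller paper.

    import numpy as np

    # Toy check of the fact above (made-up numbers, not from the Miller paper):
    # prior x ~ Normal(mu0, tau0^2), one measurement per object
    # d | x ~ Normal(x, sigma^2).  Because the data are drawn from the prior
    # predictive, the average of the per-object posterior PDFs should
    # reproduce the prior PDF.

    rng = np.random.default_rng(42)
    mu0, tau0, sigma, N = 0.0, 1.0, 0.7, 20000   # all values assumed

    x_true = rng.normal(mu0, tau0, size=N)       # truths drawn from the prior
    d = rng.normal(x_true, sigma)                # one noisy measurement each

    # conjugate-Gaussian posterior for each object
    post_var = 1.0 / (1.0 / tau0**2 + 1.0 / sigma**2)
    post_mean = post_var * (mu0 / tau0**2 + d / sigma**2)

    def normal_pdf(x, m, s):
        return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

    grid = np.linspace(-4.0, 4.0, 201)
    mean_of_posteriors = normal_pdf(grid[:, None], post_mean[None, :],
                                    np.sqrt(post_var)).mean(axis=1)
    prior_pdf = normal_pdf(grid, mu0, tau0)

    # largest deviation from the prior is small and shrinks as N grows
    print(np.abs(mean_of_posteriors - prior_pdf).max())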

ps: Though that Miller paper is excellent and filled with good ideas, I have one quibble: They have an estimate of the galaxy ellipticity that they call Bayesian because it is the posterior expectation of the ellipticity (they even use the word Bayesian in the titles of their papers). The Bayesian output is the full likelihood function or posterior PDF, not some integral or statistic thereof. A point estimate you derive from a Bayesian calculation is not itself Bayesian; it is just a regularized estimate. Bayesians produce probability distributions, not estimators! Don't get me wrong: I am not endorsing Bayes here; I am just sayin': Point estimates are never Bayesian, even if Bayes was harmed along the way.

1 comment:

  1. What's even worse is when people take the maximum of the posterior probability density as the point estimate. That's not even invariant under reparameterization (in contrast to the posterior median, which is preserved under monotonic changes of variable).

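
For what it's worth, here is a quick numerical illustration of that comment; the Gamma "posterior" and the log reparameterization below are hypothetical choices made purely for the demonstration. The posterior mode moves when you change variables, while the posterior median maps through a monotonic change of variables unchanged.

    import numpy as np
    from scipy import stats

    # Made-up Gamma posterior for a positive parameter x, and the
    # reparameterization y = log(x); all numbers are assumed for the example.
    k, theta = 3.0, 2.0
    post_x = stats.gamma(a=k, scale=theta)

    # The maximum of the density depends on the parameterization:
    map_in_x = (k - 1.0) * theta     # mode of p(x): 4.0
    map_in_y_mapped_back = k * theta # exp(mode of p(y)), since p(y) = p(x) dx/dy: 6.0

    # The median is preserved by the monotonic map y = log(x):
    y_samples = np.log(post_x.rvs(size=1_000_000, random_state=1))
    print(map_in_x, map_in_y_mapped_back)                  # 4.0 vs 6.0: different points
    print(post_x.median(), np.exp(np.median(y_samples)))   # agree up to Monte Carlo noise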