2010-08-03

extremely extreme deconvolution

I demonstrated on our fake exoplanet data that my exoplanet deconvolution methodology works. It is even more extreme than Bovy's extreme deconvolution code, because it can use data with arbitrarily complicated uncertainties: not just heteroscedastic Gaussian uncertainties (which extreme deconvolution already handles) but non-Gaussian, bimodal, or what have you. It ain't fast.
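
To make that concrete, here is a minimal sketch of one standard way to do this, under assumptions of mine: represent each datum's arbitrarily nasty uncertainty purely by samples, and evaluate each datum's marginalized likelihood under the population model by a Monte Carlo average over those samples. Everything here (the bimodal noise model, draw_noise, all parameter values) is invented for illustration; it is not the actual exoplanet code.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(17)

    # Fake population: true values drawn from N(mu=2.0, sigma=0.5).
    N, K = 300, 512
    truths = rng.normal(2.0, 0.5, size=N)

    def draw_noise(size):
        # deliberately non-Gaussian, bimodal measurement noise
        return rng.choice([-1.0, 1.0], size=size) * rng.gamma(2.0, 0.3, size=size)

    data = truths + draw_noise(N)

    # Represent each datum's uncertainty by K samples: under a flat interim
    # prior, x = d - e with e drawn from the noise distribution is a
    # posterior sample for that datum.
    samples = data[:, None] - draw_noise((N, K))  # shape (N, K)

    def neg_log_like(theta):
        # One Monte Carlo integral per datum: average the population
        # density N(mu, sigma) over that datum's uncertainty samples.
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)
        dens = np.exp(-0.5 * ((samples - mu) / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)
        return -np.sum(np.log(dens.mean(axis=1) + 1e-300))

    result = minimize(neg_log_like, x0=[0.0, 0.0], method="Nelder-Mead")
    print("inferred mu, sigma:", result.x[0], np.exp(result.x[1]))

Every likelihood evaluation costs an N-by-K sum, and any sampler or optimizer on top multiplies that cost again, which is presumably part of why it ain't fast.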

In related news, I figured out (the hard way) that if you are sampling the posterior probability distribution function but keeping track of the likelihoods, the best likelihood you see in the sampling is not necessarily all that close to the true maximum of the likelihood function. This is obvious in retrospect, but I was confused for a while. I think it is true that if your prior is not extremely informative, you ought to eventually see the maximum-likelihood point, but you might have to wait a long time, especially if your likelihood is not very sharply peaked.

1 comment:

  1. Not seeing the maximum likelihood in samples is also a dimensionality issue. Imagine you have a broad flat prior and likelihood exp(-0.5 Σ_d x_d^2) in 100 dimensions. The maximum likelihood is 1, but the maximum you'll see in a million independent samples from the posterior is more like 1e-10. A maximum is not typical in high dimensions; you'll never go near it. (Related: an ML or MAP solution can be a poor initialization for MCMC!)

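The commenter's numbers are easy to check numerically. Here is a quick sketch (the seed and variable names are mine): for x drawn from a 100-dimensional standard normal, the likelihood is exp(-0.5 |x|^2) and |x|^2 is chi-squared with 100 degrees of freedom, so we can draw the squared radii directly instead of the full 100-dimensional points.

    import numpy as np

    rng = np.random.default_rng(42)

    d, n = 100, 1_000_000
    # |x|^2 for x ~ N(0, I_d) is chi-squared with d degrees of freedom.
    r2 = rng.chisquare(df=d, size=n)

    print("maximum of the likelihood:   1.0")
    print("best likelihood in n draws: ", np.exp(-0.5 * r2.min()))       # ~1e-10
    print("typical (median) likelihood:", np.exp(-0.5 * np.median(r2)))  # ~2e-22

The best value seen in a million samples falls about ten orders of magnitude short of the maximum, just as the comment says, while a typical sample sits near exp(-d/2).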