2015-02-04

doubly intractable group meeting

At group meeting, Huppenkothen introduced us to methods for sampling "doubly intractable" Bayesian inference problems. The problem (and solution) in question is a variable-rate Poisson problem: Poisson-distributed objects (like photons) arrive according to a mean rate that varies with time, and that rate function is itself drawn from another process, in this case a Gaussian Process (passed through a function to make it non-negative). The best methods at present involve instantiating a lot of additional latent variables and then doing something like Gibbs sampling in the joint distribution of the parameters you care about and the newly introduced latent variables. We didn't understand everything about these complicated methods, but one of the authors, Iain Murray (Edinburgh), will be visiting the group next month, so we plan to make him talk.
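
A minimal generative sketch may help fix ideas. The Python below is an illustration only, not the samplers discussed at the meeting: it draws a rate function from a GP, squashes it through a sigmoid link (one common choice; the post doesn't name the link) so it is non-negative, and generates arrival times by thinning. The kernel, the link, and all numbers are assumptions for illustration. The "doubly intractable" part is that the likelihood of the observed arrival times involves the integral of the rate function over the observing window, which has no closed form for a generic GP draw.

```python
# Illustrative sketch only: a rate function drawn from a GP, made
# non-negative with a sigmoid link, and events sampled by thinning.
# Kernel, link, and all numbers are assumptions, not the meeting's method.
import numpy as np

rng = np.random.default_rng(42)

def squared_exp_kernel(t1, t2, amp=1.0, ell=1.0):
    """Squared-exponential covariance between time grids t1 and t2."""
    return amp ** 2 * np.exp(-0.5 * ((t1[:, None] - t2[None, :]) / ell) ** 2)

# Draw one realization of the latent GP on a dense grid.
T = 10.0                                   # total observing time
grid = np.linspace(0.0, T, 500)
K = squared_exp_kernel(grid, grid) + 1e-8 * np.eye(grid.size)
g = rng.multivariate_normal(np.zeros(grid.size), K)

lam_max = 5.0                              # upper bound on the rate
rate = lam_max / (1.0 + np.exp(-g))        # sigmoid link keeps the rate positive

# Thinning (Lewis-Shedler): propose events at the constant rate lam_max,
# then keep each proposal with probability rate(t) / lam_max.
n_prop = rng.poisson(lam_max * T)
t_prop = np.sort(rng.uniform(0.0, T, n_prop))
keep = rng.uniform(size=n_prop) < np.interp(t_prop, grid, rate) / lam_max
arrivals = t_prop[keep]

print(f"{arrivals.size} arrival times kept out of {n_prop} proposals")
```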

Angus arrived for a few days of hacking and we talked about our super-Nyquist asteroseismology projects. We also started email conversations with the authors of this paper and this paper, both of which are impressive for their pedagogical presentation (as well as their results).

5 comments:

  1. Ben Wandelt points us (via Twitter) at these two papers: http://arxiv.org/abs/0911.2496 and http://arxiv.org/abs/0911.2493

  2. And I would partially counter Ben's point by noting that one can also devise bespoke methods for solving this sort of problem that don't require grids: http://www.stat.duke.edu/~var11/pubs/rao13a.pdf

  3. Noob question. If one separates the parameters for the Poisson rate into those describing the 'normalisation' and those describing the 'shape', isn't the likelihood for the shape parameters just the same as in density estimation?

    Replies
    1. Answer to my question: yes, it's equivalent, but you still need to know the normalisation to do inference (the factorization is written out after the comments). However, I'd be more inclined to use something like a mixture model, where the normalisation is known, rather than a GP.

  4. It sounds like you were discussing Ryan Adams's PhD work that I was involved with. Vinayak Rao (whom Ewan cites above) generalized this work, among other things, in his PhD. Brendon is right: density estimation and point-process modelling are closely related, and Ryan's thesis covered both.

    For light curves (densely sampled data, 1D if there aren't other covariates in the covariance), the above stuff is probably overkill. Binning (while often evil) seems fine here and removes the double intractability; a minimal sketch of the resulting binned likelihood appears after the comments. FFTs (with suitable padding, and with some restrictions on the covariance) can speed up GP computations. Alternatively one could try converting to a Gauss-Markov process, or using the techniques Hogg has recently been involved with. (It will be interesting to compare.) I assume the papers Wandelt points to use FFTs. After binning, standard MCMC methods apply, ideally taking into account work on making the parameters of the covariance mix well.

    I look forward to seeing what specifically needs to be done, and discussing further.

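On the normalisation-versus-shape exchange above (comment 3): writing the rate as a normalisation times a shape, λ(t) = A f(t) with f integrating to one over the observing window, the inhomogeneous-Poisson likelihood for arrival times t_1, ..., t_N factorizes as follows (symbols introduced here for illustration, not taken from the thread):

```latex
p(\{t_i\} \mid A, f)
  = e^{-\int \lambda(t)\,dt} \prod_{i=1}^{N} \lambda(t_i)
  = \left( e^{-A} A^{N} \right) \prod_{i=1}^{N} f(t_i),
\qquad \lambda(t) = A\, f(t), \quad \int f(t)\,dt = 1 .
```

The second factor is exactly the density-estimation likelihood for the shape, and the normalisation A enters only through a Poisson term in the number of events N. The catch in the GP case is that f(t) = λ(t) / ∫ λ(t') dt' involves the intractable integral of the transformed GP, which is presumably the motivation for the mixture-model suggestion above, where the normalisation is known by construction.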
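
On the binning suggestion above (comment 4): once the counts are binned, the rate integral reduces to a sum over bins and the likelihood becomes a product of tractable Poisson terms, so standard MCMC over the latent function and kernel parameters applies. Below is a minimal sketch of that binned log-likelihood; the exponential link, the binning, and the numbers are assumptions for illustration, not a specific published method.

```python
# Illustrative sketch of the binned alternative: Poisson counts per bin with
# expected counts exp(g_j) * bin_width, where g is the latent log-rate.
# Link, binning, and numbers are assumptions, not a specific published method.
import numpy as np

def binned_log_likelihood(counts, g, bin_width):
    """Poisson log-likelihood of binned counts given latent log-rate values g.

    The constant log(counts!) terms are dropped since they do not depend on g.
    """
    mu = np.exp(g) * bin_width                 # expected counts per bin
    return np.sum(counts * np.log(mu) - mu)

# Toy usage with stand-in event times.
T, n_bins = 10.0, 50
edges = np.linspace(0.0, T, n_bins + 1)
bin_width = T / n_bins

rng = np.random.default_rng(0)
arrivals = np.sort(rng.uniform(0.0, T, 40))    # stand-in arrival times
counts, _ = np.histogram(arrivals, bins=edges)

g_trial = np.zeros(n_bins)                     # flat trial log-rate
print(binned_log_likelihood(counts, g_trial, bin_width))
```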