2009-08-26

sequential sampling (Gibbs?)

In what little time I spent on research today, I switched my black-hole mass against bulge-mass power-law fitting code over to making its moves in parameter space one dimension at a time, sequentially looping over the available dimensions. This let me set the step sizes objectively, and it seems to have improved convergence. We have been learning that although MCMC is a super-useful and super-simple tool, in practice there are many considerations involved in making sure it runs efficiently and converges; there are no extremely straightforward guarantees. This deserves a detailed write-up in the near future.
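
For concreteness, here is a minimal sketch of the kind of update described above: plain Metropolis, but proposing along one parameter axis at a time, each with its own step size. The log-posterior interface, the Gaussian proposals, and all names are my assumptions for illustration, not the actual fitting code.

```python
import numpy as np

def metropolis_one_dim_at_a_time(log_post, x0, step_sizes, n_sweeps, rng=None):
    """Sequential single-dimension Metropolis: one sweep updates each
    coordinate in turn, with a per-dimension Gaussian proposal width."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    logp = log_post(x)
    chain = np.empty((n_sweeps, x.size))
    for t in range(n_sweeps):
        for d in range(x.size):
            x_prop = x.copy()
            x_prop[d] += step_sizes[d] * rng.standard_normal()
            logp_prop = log_post(x_prop)
            # Accept with probability min(1, p(x') / p(x)).
            if np.log(rng.random()) < logp_prop - logp:
                x, logp = x_prop, logp_prop
        chain[t] = x
    return chain
```

With the moves one-dimensional, each `step_sizes[d]` can be tuned against that parameter's own scale, which is one way to make the step-size setting objective.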

1 comment:

  1. [Commenting late on this.]

    Presumably if you are setting step-sizes you are still doing Metropolis. For historical reasons “Gibbs sampling” specifically means resampling from conditional distributions, rather than more generally applying updates to one dimension at a time.

    You could consider slice sampling each dimension. This has the same technical requirements as simple Metropolis: you just need to be able to evaluate the distribution, up to a constant, at given points. Slice sampling automatically finds a suitable distance to move and, unlike Metropolis, always moves. Strictly there is still a step-size parameter, but it matters far less than in Metropolis; see the sketch after this comment.

    As well as running efficiently and converging, the code has to be correct, so that the chain is converging efficiently to the right thing(!). Sanity-checking inferences with datasets drawn from the prior can check (but not prove) correctness as well as convergence properties; a sketch of that check also follows this comment.
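
Here is a sketch of the univariate slice sampler the comment suggests, following Neal's (2003) stepping-out and shrinkage procedure. The interface (a `log_post` callable plus a coordinate index) is assumed for illustration; the bracket width `w` affects efficiency only, not correctness.

```python
import numpy as np

def slice_sample_dim(log_post, x, d, w, rng):
    """One slice-sampling update of coordinate d (Neal 2003:
    stepping out, then shrinkage)."""
    def logp_at(xd):
        xp = x.copy()
        xp[d] = xd
        return log_post(xp)

    # Define the slice: all points with log density above this level.
    logy = log_post(x) + np.log(rng.random())

    # Step out: place a width-w bracket at random around the current
    # point, then grow each end until it falls outside the slice.
    left = x[d] - w * rng.random()
    right = left + w
    while logp_at(left) >= logy:
        left -= w
    while logp_at(right) >= logy:
        right += w

    # Shrink: sample uniformly within the bracket, shrinking it on
    # each rejection, so the update always terminates and always moves.
    while True:
        xd_new = left + (right - left) * rng.random()
        if logp_at(xd_new) >= logy:
            x_new = x.copy()
            x_new[d] = xd_new
            return x_new
        if xd_new < x[d]:
            left = xd_new
        else:
            right = xd_new
```

Sweeping this over the dimensions gives a sampler with the same requirements as the Metropolis sketch above, but no acceptance-rate tuning.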
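
The comment's prior-based sanity check can also be made concrete as a coverage test: draw parameters from the prior, simulate a dataset, run the sampler, and count how often the true values land inside the nominal credible intervals. The three callables here are a hypothetical interface, not anything from the post; coverage far from the nominal level flags a bug or non-convergence, while coverage near it is consistent with (but does not prove) correctness.

```python
import numpy as np

def prior_coverage_check(sample_prior, simulate_data, run_mcmc,
                         n_trials=100, level=0.68, rng=None):
    """Fraction of trials in which each true parameter falls inside the
    central `level` credible interval; should be close to `level`."""
    rng = np.random.default_rng() if rng is None else rng
    hits = None
    for _ in range(n_trials):
        theta = sample_prior(rng)          # true parameters from the prior
        data = simulate_data(theta, rng)   # dataset generated by the model
        chain = run_mcmc(data, rng)        # samples, shape (n, n_params)
        lo = np.quantile(chain, (1 - level) / 2, axis=0)
        hi = np.quantile(chain, (1 + level) / 2, axis=0)
        inside = (lo <= theta) & (theta <= hi)
        hits = inside.astype(int) if hits is None else hits + inside
    return hits / n_trials
```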
