Iain Murray commented yesterday that I should look at this paper and this paper, both by Salakhutdinov, Roweis, and Ghahramani, about optimizing marginalized likelihoods. Standard practice is expectation-maximization (EM), but the upshot of the two papers (if I understand them correctly) is that gradient-based optimization can beat EM when the latent variables (the ones being marginalized out) are badly determined. That's relevant to the galaxy deprojection Dalya Baron and I are doing, and to cryo-EM.
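Here is a minimal sketch of the comparison on a toy problem: a two-component, unit-variance, one-dimensional Gaussian mixture with fixed weights, fitting only the component means. The model, the data, and the step size are all my own choices for illustration, not anything taken from the papers; the point is just that the gradient of the log marginal likelihood is available in closed form right next to the EM update, so the two optimizers can be run side by side.

```python
import numpy as np

# Toy problem (my construction, not from the papers): two-component
# 1-D Gaussian mixture, unit variances, fixed weights; fit the means.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(2.0, 1.0, 200)])
w = np.array([0.5, 0.5])  # fixed mixture weights

def responsibilities(x, mu):
    # Posterior over the latent assignment z for each datum:
    # p(z=k | x_i) proportional to w_k * N(x_i | mu_k, 1).
    logp = -0.5 * (x[:, None] - mu[None, :]) ** 2 + np.log(w)
    logp -= logp.max(axis=1, keepdims=True)  # for numerical stability
    p = np.exp(logp)
    return p / p.sum(axis=1, keepdims=True)

def em_step(x, mu):
    # E-step: responsibilities; M-step: responsibility-weighted means.
    r = responsibilities(x, mu)
    return (r * x[:, None]).sum(axis=0) / r.sum(axis=0)

def grad_log_marginal(x, mu):
    # d/dmu_k of sum_i log sum_k w_k N(x_i | mu_k, 1)
    # = sum_i r_ik (x_i - mu_k): the same residuals EM averages.
    r = responsibilities(x, mu)
    return (r * (x[:, None] - mu[None, :])).sum(axis=0)

mu_em = np.array([-0.5, 0.5])
mu_gd = mu_em.copy()
lr = 1e-3  # step size tuned by hand for this toy; real use needs care
for _ in range(100):
    mu_em = em_step(x, mu_em)
    mu_gd = mu_gd + lr * grad_log_marginal(x, mu_gd)

print("EM:", mu_em, " gradient ascent:", mu_gd)
```

The connection the papers exploit (again, as I read them) is visible in the code: the gradient of the log marginal likelihood is the responsibility-weighted complete-data gradient. When the responsibilities are nearly uniform, that is, when the latents are badly determined, the EM update barely moves per iteration, and that is exactly the regime where the gradient-based methods are claimed to win.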