2016-06-15

#ISBA2016, day three

Today I saw some good talks on approximate Bayesian computation (ABC) or likelihood-free inference (as it were). One highlight was Ewan Cameron's talk on epidemiology. He gave me two ideas that I could translate immediately into my own work. The first is a definition of “Indirect Inference”, which (if I understand correctly) is exactly the practice in cosmology that I have been railing against in my Inference-of-Variances project. The second is the (very simple: meaning great!) idea that one can trade away the long-range covariance of a Gaussian process by complexifying the mean function: Put in a non-trivial, flexible mean function and the variance has less to do. That could be valuable in many of our contexts. One has to be careful not to use the data twice; I believe Cameron handled this by splitting the data into two parts, one of which constrained the mean function, and one of which constrained the GP.
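
To make that mean/covariance trade-off concrete, here is a minimal JAX sketch of the data-splitting version: fit a flexible mean on one half of a toy dataset, then let a short-length-scale GP handle the residuals on the other half. The toy data, the cubic mean, the RBF kernel, and the 50/50 split are my own illustrative choices, not Cameron's actual setup.

```python
import jax.numpy as jnp
from jax import random

# Toy data: noisy draws from a smooth trend (illustrative only).
key = random.PRNGKey(0)
x = jnp.linspace(0.0, 10.0, 80)
y = jnp.sin(x) + 0.3 * x + 0.1 * random.normal(key, x.shape)

# Split the data in two so no datum is used twice:
# one half constrains the mean function, the other the GP.
x_mean, y_mean = x[::2], y[::2]
x_gp, y_gp = x[1::2], y[1::2]

# Flexible mean: least-squares cubic fit on the first half.
design = jnp.vander(x_mean, 4)
coeffs, *_ = jnp.linalg.lstsq(design, y_mean)
mean_fn = lambda t: jnp.vander(t, 4) @ coeffs

# Short-range RBF kernel: the flexible mean has absorbed the
# long-range trend, so the covariance can afford a short length scale.
def rbf(a, b, ell=0.5, amp=0.3):
    return amp**2 * jnp.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

# GP conditioned on the residuals of the second half.
resid = y_gp - mean_fn(x_gp)
K = rbf(x_gp, x_gp) + 0.1**2 * jnp.eye(x_gp.size)  # noise on the diagonal

# Prediction: GP posterior mean of the residuals, added back to the mean.
x_new = jnp.linspace(0.0, 10.0, 200)
pred = mean_fn(x_new) + rbf(x_new, x_gp) @ jnp.linalg.solve(K, resid)
```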

Other highlights included Wentao Li showing that he could adjust ABC to give precise results in finite time when the data get large (unadulterated ABC becomes impossible as the data grow, because the distance thresholds generally have to shrink and acceptance rates go to zero). Edward Meeds (in a move similar to things mentioned to me by Brendon Brewer) separated the parameters of the problem from the random-number draws (simulations usually have random numbers in their initial conditions, and so on); conditioned on the random-number draws, the code becomes deterministic, and you can auto-differentiate it. Then: optimization, Hamiltonian Monte Carlo, whatever! That's a good idea.
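
For concreteness, here is a bare-bones rejection-ABC sketch in JAX, on a toy Gaussian-mean problem of my own devising (it does not implement Li's correction), showing where the threshold-versus-acceptance tension comes from.

```python
import jax.numpy as jnp
from jax import random

# Rejection ABC: infer the mean of a Gaussian from its sample mean.
def abc_rejection(key, data, n_props=10_000, eps=0.1):
    n = data.size
    k1, k2 = random.split(key)
    thetas = random.uniform(k1, (n_props,), minval=-3.0, maxval=3.0)  # prior draws
    # One simulated dataset per proposal; compare summary statistics.
    sims = thetas[:, None] + random.normal(k2, (n_props, n))
    dist = jnp.abs(sims.mean(axis=1) - data.mean())
    return thetas[dist < eps]  # accepted draws approximate the posterior

key = random.PRNGKey(0)
data = 1.5 + random.normal(key, (100,))
post = abc_rejection(random.PRNGKey(1), data)
# As n grows, the summary concentrates, eps must shrink with it,
# and the acceptance fraction collapses toward zero.
```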

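And a minimal sketch of the fix-the-randomness trick, again in JAX with a toy simulator of my own (not Meeds's actual model): draw all the random numbers up front, treat them as fixed inputs, and the simulator becomes a deterministic, differentiable function of its parameters.

```python
import jax.numpy as jnp
from jax import grad, random

def simulator(theta, noise):
    """Toy stochastic simulator: drift plus diffusion steps.
    All randomness enters through `noise`, drawn up front, so for
    fixed `noise` this is deterministic in `theta`."""
    drift, scale = theta
    steps = drift + scale * noise  # pre-drawn random increments
    return jnp.cumsum(steps)       # simulated trajectory

def loss(theta, noise, data):
    # ABC-style distance between simulation and observations,
    # now differentiable with respect to theta for fixed noise.
    return jnp.sum((simulator(theta, noise) - data) ** 2)

key = random.PRNGKey(42)
noise = random.normal(key, (100,))   # fix the random-number draws
data = jnp.linspace(0.0, 5.0, 100)   # stand-in for observed data

theta = jnp.array([0.1, 1.0])
g = grad(loss)(theta, noise, data)   # exact gradient through the simulator
# ...feed g to gradient descent, Hamiltonian Monte Carlo, whatever.
```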