2012-04-11

generative model, slowness

Yesterday Long coded up the insane-robot censored-data model, and today we reviewed and tested the code. It is super-slow, because it contains a double (or triple, depending on how you look at it) marginalization over nuisance parameters, with as many nuisance parameters as data points (more, really). In the afternoon Richards refactored the code and I made some fake data from the same generative model that generates our likelihood. That's the great thing about a generative model: It gives you the likelihood and a synthetic-data procedure all at once. At the end of the day, Long figured out that we can make some different distribution choices and make at least some of the integrals analytic. Better to use an approximate, computable likelihood than a better, incomputable one! We have to get on the good foot because there are only two days of hacking left.
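[Not the actual model, which is Long's; this is a hypothetical toy sketch of the two points above: the same generative model yields both a fake-data generator and a likelihood, and if you choose Gaussian distributions for the per-datum nuisance parameters, they marginalize out analytically instead of requiring a numerical integral per data point. All symbols (`mu`, `V`, `sigmas`) are made up for illustration.]

```python
import numpy as np

rng = np.random.default_rng(42)

def make_fake_data(mu, V, sigmas):
    """Draw data from the (toy) generative model: each datum i has its
    own nuisance true value t_i ~ N(mu, V), observed with known
    per-point noise sigma_i."""
    t = rng.normal(mu, np.sqrt(V), size=len(sigmas))  # one nuisance per datum
    return rng.normal(t, sigmas)

def ln_likelihood(mu, V, sigmas, y):
    """Same model, with every nuisance t_i integrated out analytically:
    a Gaussian convolved with Gaussian noise gives y_i ~ N(mu, V + sigma_i^2),
    so no numerical marginalization is needed."""
    var = V + sigmas ** 2
    return -0.5 * np.sum((y - mu) ** 2 / var + np.log(2.0 * np.pi * var))

# fake data and likelihood come from the one model definition
sigmas = rng.uniform(0.5, 2.0, size=100)
y = make_fake_data(mu=3.0, V=1.0, sigmas=sigmas)

# sanity check: the likelihood should peak near the mu that made the data
grid = np.linspace(0.0, 6.0, 61)
lnL = np.array([ln_likelihood(m, 1.0, sigmas, y) for m in grid])
best = grid[np.argmax(lnL)]
```

The Gaussian choice is exactly the kind of trade Long proposed: a slightly wrong but computable likelihood, in place of a better one that needs one numerical integral per data point.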
