Kate Storey-Fisher, Abby Williams, and I spent some time discussing unpublished work that relies heavily on calculations of the Bayesian evidence. Bayesian evidence—what I call the “fully marginalized likelihood”—relates to the volume of the posterior in parameter space. It is generally extremely sensitive to the width of the prior pdf, since if you are comparing two models with different parameterizations, the numbers you get depend on how you normalize or scale out the units of those parameter-space volumes. Indeed, you can get any evidence ratios you want by tuning prior pdf widths. That's bad if you are trying to conclude something, scientifically! Bayesian inference is only principled, imho, when you can quantitatively state the prior pdf that correctly describes your beliefs, prior to seeing the new data. And even then, your evidence is special to you; any other scientist has to recompute from scratch.
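A minimal numerical sketch of that prior-width sensitivity (a toy 1D setup of my own, not anything from the unpublished work discussed above): a Gaussian likelihood with a flat, top-hat prior of width W centered on the peak. Once W is much wider than the likelihood, the evidence scales as 1/W, so the "evidence ratio" between two such models is just the ratio of the prior widths someone chose.

```python
import math

# Toy demo: 1D model, Gaussian likelihood of width sigma, and a flat
# (top-hat) prior of width W centered on the likelihood peak.
# The evidence Z = ∫ L(θ) p(θ) dθ scales as 1/W once W >> sigma,
# so Z is set by the prior width, not by how well the model fits.

def evidence(width, sigma=1.0, n=20001):
    """Midpoint Riemann sum of likelihood × prior over the prior support."""
    dtheta = width / n
    z = 0.0
    for i in range(n):
        theta = -width / 2 + (i + 0.5) * dtheta
        like = math.exp(-0.5 * (theta / sigma) ** 2)  # unnormalized likelihood
        z += like * (1.0 / width) * dtheta            # flat prior: p(θ) = 1/W
    return z

z_narrow = evidence(10.0)   # prior width 10
z_wide = evidence(100.0)    # prior width 100: identical fit to the data
print(z_narrow / z_wide)    # ≈ 10: ratio driven entirely by the prior widths
```

Both "models" here describe the data identically; the factor-of-ten evidence ratio comes purely from the widths of the two priors, which is the tuning knob the post is warning about.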