I have spent a lot of time in my life advocating cross-validation for model selection. That's sad, maybe? But for many reasons, I think it is much better than computing Bayesian evidences or fully-marginalized likelihoods (FMLs on this site!). Today, for the paper Soledad Villar (JHU) and I are writing, I made this figure, which demonstrates leave-one-out cross-validation. Each curve is a different leave-one-out fit, decorated with that fit's prediction for the left-out point. Instructive? I hope so.
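In case it helps to see the construction in code, here is a minimal sketch of the kind of loop that makes such a figure. The synthetic data, the cubic-polynomial model, and all variable names are my own illustrative assumptions, not the actual code behind the paper's figure:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(17)

# synthetic stand-in data (the real figure uses data from the paper)
n = 12
x = np.sort(rng.uniform(0., 10., n))
y = np.sin(x) + rng.normal(0., 0.2, n)

degree = 3  # hypothetical model choice: a cubic polynomial
xgrid = np.linspace(0., 10., 300)

for i in range(n):
    mask = np.ones(n, dtype=bool)
    mask[i] = False  # leave out point i
    coeffs = np.polyfit(x[mask], y[mask], degree)  # fit to the other n-1 points
    plt.plot(xgrid, np.polyval(coeffs, xgrid), "k-", alpha=0.3)
    # decorate this fit's curve with its prediction for the left-out point
    plt.plot(x[i], np.polyval(coeffs, x[i]), "ro", ms=4)

plt.plot(x, y, "ko")  # the data themselves
plt.xlabel("x")
plt.ylabel("y")
plt.show()
```

The point of the plot, on this sketch's assumptions, is that each gray curve is one of the n leave-one-out fits, and each red dot is the quantity cross-validation actually scores: that fit's prediction at the point it never saw.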