Lang and I got up early and pair-coded the case of fitting a line with arbitrarily covariant and heterogeneous errors in both dimensions, with outlier rejection. I then showed this figure in the afternoon in the MPIA Hauskolloquium. My points of greatest emphasis were:
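The likelihood for that fit can be sketched as follows. This is a minimal illustration, not the code Lang and I wrote: it assumes a line y = m x + b, projects each point's displacement onto the line's unit normal, and propagates each point's 2x2 covariance matrix into a scalar variance along that normal; the outlier-rejection machinery is omitted, and all variable names and the toy data are my own.

```python
import numpy as np

def line_loglike(m, b, x, y, S):
    """Gaussian log-likelihood for the line y = m*x + b, where point i
    has a 2x2 covariance matrix S[i] describing (possibly covariant)
    errors in both x and y."""
    theta = np.arctan(m)
    v = np.array([-np.sin(theta), np.cos(theta)])  # unit normal to the line
    # orthogonal displacement of each point from the line
    delta = v[0] * x + v[1] * y - b * np.cos(theta)
    # variance of that displacement: v^T S_i v for each point
    sigma2 = np.einsum('i,nij,j->n', v, S, v)
    return -0.5 * np.sum(delta**2 / sigma2 + np.log(2.0 * np.pi * sigma2))

# toy data drawn from a known line, with (here diagonal) covariances
rng = np.random.default_rng(0)
m_true, b_true = 2.0, 1.0
x_true = np.linspace(0.0, 10.0, 50)
S = np.tile(np.diag([0.1, 0.1]), (50, 1, 1))
x_obs = x_true + rng.normal(0.0, np.sqrt(0.1), 50)
y_obs = m_true * x_true + b_true + rng.normal(0.0, np.sqrt(0.1), 50)
```

The true parameters should score a higher log-likelihood on these toy data than any badly wrong line, which is what an optimizer (or sampler) exploits.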
- If you want to say you have the "best-fit" model, then your model parameters better optimize a justified, scalar objective function. I mean "scalar" both in the sense of "single-valued" and in the sense of respecting relevant symmetries.
- When you can create a generative model for your data, inference proceeds by maximizing the likelihood (or, better, sampling the posterior probability distribution function). You have no freedom about this; fitting does not involve much choice, at least at the conceptual level.
- Markov-Chain Monte Carlo methods—in particular with the Metropolis algorithm—are very easy to implement and run; they work even on non-linear problems, explore multiple modes of the posterior, and automatically provide marginalizations over nuisance parameters.
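To show how little code the Metropolis algorithm takes, here is a bare-bones sketch on a toy two-parameter Gaussian posterior (deliberately not the line-fitting problem above); the target, step size, and chain length are all illustrative choices of mine.

```python
import numpy as np

def metropolis(logpost, theta0, step, nsteps, rng):
    """Bare-bones Metropolis sampler: Gaussian proposals, accepted with
    probability min(1, exp(new - old)) in the log-posterior."""
    theta = np.array(theta0, dtype=float)
    lp = logpost(theta)
    chain = np.empty((nsteps, theta.size))
    for i in range(nsteps):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = logpost(prop)
        if np.log(rng.random()) < lp_prop - lp:  # accept or reject
            theta, lp = prop, lp_prop
        chain[i] = theta  # on rejection, the old point repeats
    return chain

# toy target: independent Gaussians over (parameter of interest, nuisance)
logpost = lambda th: -0.5 * (th[0] - 3.0) ** 2 - 0.5 * (th[1] + 1.0) ** 2
rng = np.random.default_rng(42)
chain = metropolis(logpost, [0.0, 0.0], 1.0, 20000, rng)
mu_samples = chain[5000:, 0]  # discard burn-in
```

The marginalization point falls out for free: to marginalize over the nuisance parameter, you simply ignore its column of the chain, as in the last line.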