Our main progress on The Cannon, paper 1, was to go through all the outstanding discussion points, caveats, realizations, and notes, and use them to build an outline for a Discussion section at the end of the paper. We also looked at final figure details, like colors (I am against them; I still read papers printed out on a black-and-white printer!), point sizes, transparency, and so on. We discussed how to interpret the cross-validation tests, and why the leave-one-star-out cross-validation looks so much better than the leave-one-cluster-out cross-validation: in the latter case, when you leave out a whole cluster, you remove a significant footprint in stellar label space from the training data, so the held-out stars must be predicted by extrapolation. The Cannon's training set is a good example of a situation in which performance improves much faster than the square root of N (the number of training objects).
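To make the difference between the two validation schemes concrete, here is a minimal sketch (not the paper's actual code, and not The Cannon's model) using scikit-learn's LeaveOneOut and LeaveOneGroupOut splitters on made-up data. The toy "clusters", labels, and the nearest-neighbor regressor standing in for a data-driven model are all assumptions for illustration only.

```python
# Toy contrast of leave-one-star-out vs. leave-one-cluster-out cross-validation.
# Everything here (data, cluster labels, model) is invented for illustration.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import LeaveOneOut, LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(42)

# 6 fake clusters of 10 stars each; every cluster occupies a narrow patch of
# a 1-D "label" space, mapped noisily onto 5 "flux" features.
n_clusters, stars_per_cluster = 6, 10
cluster_id = np.repeat(np.arange(n_clusters), stars_per_cluster)
label = cluster_id * 1.0 + rng.normal(0.0, 0.1, size=cluster_id.size)
X = np.outer(label, rng.normal(size=5)) + rng.normal(0.0, 0.05, size=(label.size, 5))

model = KNeighborsRegressor()  # stand-in for any data-driven model

# Leave-one-star-out: the held-out star's cluster-mates stay in the training
# set, so its region of label space remains well covered.
loo_mae = -cross_val_score(model, X, label, cv=LeaveOneOut(),
                           scoring="neg_mean_absolute_error").mean()

# Leave-one-cluster-out: the whole cluster's footprint disappears from the
# training set, so the held-out stars sit in a region with no training data.
logo_mae = -cross_val_score(model, X, label, groups=cluster_id,
                            cv=LeaveOneGroupOut(),
                            scoring="neg_mean_absolute_error").mean()

print("leave-one-star-out    MAE:", loo_mae)
print("leave-one-cluster-out MAE:", logo_mae)
```

In this toy setup the cluster-wise splits score much worse than the star-wise splits for exactly the reason described above: removing a whole cluster removes its entire footprint in label space, while removing a single star leaves its neighbors behind.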