Fadely, Willman, and I have a star–galaxy separation method (a hierarchical Bayesian discrete classifier) for LSST that can be trained on untagged data; that is, we do not need properly classified objects (any truth table) to learn or optimize the parameters of our model. However, we do need a truth table to test whether our method is working. Not surprisingly, there are not a huge number of astronomical sources at faint magnitudes (think 24th magnitude) where confident classification (that is, with spectroscopy or very wide-wavelength SED measurements) has been done. So our biggest problem is to find such a sample for paper zero (the method paper). One idea is to run on a bunch of data and just make the prediction, which is gutsy (so I like it), but really, who will believe us that our method is good if we haven't run the test to the end?
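(Purely for illustration, and not our actual model: the toy sketch below shows the flavor of fitting a classifier to untagged data, using a two-component Gaussian mixture fit by expectation-maximization to a made-up, concentration-like feature. The feature, the numbers, and the scikit-learn machinery are all stand-ins.)

# Toy sketch only: learn two classes with no truth table, via a Gaussian
# mixture fit by EM. This is NOT the hierarchical Bayesian classifier of
# Fadely, Willman & Hogg; the feature and all numbers are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)

# Fake concentration-like feature (think psfMag - modelMag): point sources
# cluster near zero, extended sources scatter to larger values.
stars = rng.normal(0.00, 0.02, size=500)
galaxies = rng.normal(0.15, 0.08, size=500)
feature = np.concatenate([stars, galaxies]).reshape(-1, 1)

# Fit a two-component mixture with no labels at all.
gmm = GaussianMixture(n_components=2, random_state=0).fit(feature)

# Posterior class probabilities for every source; the component with the
# smaller mean is, by construction here, the star-like one.
star_component = int(np.argmin(gmm.means_))
p_star = gmm.predict_proba(feature)[:, star_component]
print("fraction classified as stars:", float(np.mean(p_star > 0.5)))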
2011-09-06
star-galaxy separation; testing
The LSST image simulation effort has both reduced images and a "truth table" of star v. galaxy. Goes down to r=28.
https://www.lsstcorp.org/sciencewiki/images/DC_Handbook_v1.1.pdf
What about (1) making predictions for the faint sources, but, for verification/testing of the method, (2) running your method on brighter sources where confident classification HAS been done, adding noise etc. before applying your method so as to simulate sources at 24th mag? (Maybe you've already done #2?)
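(A rough sketch of suggestion #2 above, with a made-up helper and entirely hypothetical numbers: degrade high signal-to-noise photometry of confidently classified bright sources down to the signal-to-noise expected near 24th mag, then re-run the classifier on the degraded measurements. Neither degrade_to_faint nor the target_snr value comes from the paper or from any LSST tool.)

# Illustrative only: add Gaussian noise to bright-source fluxes so the
# effective S/N matches an assumed faint-source target, then feed the
# degraded photometry back through the classifier.
import numpy as np

rng = np.random.default_rng(0)

def degrade_to_faint(flux, flux_err, target_snr=5.0):
    # target_snr ~ assumed S/N of a 24th-mag source; a hypothetical number.
    target_err = np.abs(flux) / target_snr
    # Extra variance needed so the total error reaches the faint-source level.
    extra_var = np.clip(target_err**2 - flux_err**2, 0.0, None)
    noisy_flux = flux + rng.normal(0.0, np.sqrt(extra_var))
    return noisy_flux, np.sqrt(flux_err**2 + extra_var)

# Made-up bright-source photometry with high S/N:
flux = np.array([1000.0, 800.0, 1200.0])
flux_err = np.array([10.0, 8.0, 12.0])
faint_flux, faint_err = degrade_to_faint(flux, flux_err)
print(faint_flux, faint_err)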