The close reader (there's a reader, I know, but a close reader?) of this blog will have noticed that CampHogg is in transition. We used to think that projects producing "catalogs" from telescope data ought to deliver likelihood information. Now we think this is probably impossible in general, and that we will have to live with (at best) posterior probability information, computed under some priors. We discussed this in group meeting, in particular as it relates to Malz's project and LSST: the photometric-redshift system LSST expects to build using cross-correlations (with, say, quasars) will (if the method works) produce posterior probability distribution function estimates (not values, but estimates, which is scary).
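To make the cross-correlation idea concrete, here is a toy numpy sketch (emphatically not LSST's actual pipeline; every number, sample, and name below is invented) of the basic clustering-redshift move: bin the reference (spectroscopic) objects by redshift, count the excess of unknown-redshift objects around them relative to a uniform field, and read off the shape of dN/dz for the unknown sample.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy sky: point "clusters" at random positions, each at one redshift.
n_clusters = 400
cluster_xy = rng.uniform(0.0, 100.0, size=(n_clusters, 2))
cluster_z = rng.uniform(0.1, 1.1, size=n_clusters)

def draw_members(n_per, sigma=0.5):
    # Scatter n_per points around every cluster center.
    idx = np.repeat(np.arange(n_clusters), n_per)
    xy = cluster_xy[idx] + rng.normal(0.0, sigma, size=(idx.size, 2))
    return xy, cluster_z[idx]

# Reference sample: positions plus known (spectroscopic) redshifts.
ref_xy, ref_z = draw_members(5)

# Unknown sample: positions only; its true dN/dz is a bump at z ~ 0.5,
# made by keeping cluster members with a z-dependent probability.
xy, z_true = draw_members(5)
keep = rng.uniform(size=z_true.size) < np.exp(-0.5 * ((z_true - 0.5) / 0.1) ** 2)
unk_xy = xy[keep]

# Clustering-redshift move: in each reference redshift bin, count
# unknown objects inside an aperture around the reference objects and
# subtract what a uniform (unclustered) field would contribute.
z_edges = np.linspace(0.1, 1.1, 11)
r_ap = 1.0
mean_density = unk_xy.shape[0] / 100.0**2

dNdz_hat = np.zeros(z_edges.size - 1)
for i in range(z_edges.size - 1):
    sel = (ref_z >= z_edges[i]) & (ref_z < z_edges[i + 1])
    refs = ref_xy[sel]
    d2 = ((unk_xy[None, :, :] - refs[:, None, :]) ** 2).sum(axis=-1)
    pairs = (d2 < r_ap**2).sum()
    background = sel.sum() * mean_density * np.pi * r_ap**2
    dNdz_hat[i] = (pairs - background) / max(sel.sum(), 1)

dNdz_hat = np.clip(dNdz_hat, 0.0, None)
dNdz_hat /= dNdz_hat.sum()
print(np.round(dNdz_hat, 3))  # peaks near z = 0.5, as built in
```

The toy ignores everything that makes this hard in practice (bias evolution, magnification, survey geometry), which is exactly where the questions below come in.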
Key questions we identified include the following: What are the effective priors implied by that procedure? What deep assumptions lie behind the cross-correlations? (I think they have to be measured at large scales to work properly, where linear bias and all that hold.) And has the method been demonstrated to work, empirically? I got all confused at the end of group meeting by the fact that the method generates noisy estimates of a pdf, which is a strange meta-issue. What do you do with a noisily known posterior pdf?
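One naive way to think about that meta-issue, sketched below under the invented assumption that the pipeline can hand you an ensemble of noisy pdf realizations on a redshift grid (say, from jackknife resamplings of the cross-correlation signal): anything you compute from the pdf inherits the pdf's noise, so you can at least propagate that noise empirically through the ensemble.

```python
import numpy as np

rng = np.random.default_rng(0)

# Redshift grid and a "true" posterior pdf for one object.
z = np.linspace(0.0, 2.0, 201)
dz = z[1] - z[0]
true_pdf = np.exp(-0.5 * ((z - 0.7) / 0.08) ** 2)
true_pdf /= true_pdf.sum() * dz

# Pretend the pipeline delivers K noisy estimates of that pdf
# (think: jackknife realizations of the cross-correlation measurement).
K = 200
noisy = np.clip(true_pdf + rng.normal(0.0, 0.5, size=(K, z.size)), 0.0, None)
noisy /= noisy.sum(axis=1, keepdims=True) * dz

# Any quantity derived from the pdf scatters from realization to
# realization; here, the posterior-mean redshift.
zbar = (noisy * z).sum(axis=1) * dz
print(f"posterior mean z: {zbar.mean():.3f} +/- {zbar.std():.3f}")
print(f"from the noise-free pdf: {(true_pdf * z).sum() * dz:.3f}")
```

This only dodges the real question, of course: a scatter on a point estimate is not the same as a principled treatment of an uncertain posterior, which presumably wants some hierarchical story.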