unjustified uncertainty estimates

Schiminovich came to NYU and we got closer to having a complete set of measured GALEX photometry for all SDSS point sources. We obtained these photometric measurements by performing aperture photometry; today we worked out a heuristic strategy for inflating the error bar when a measurement is likely being affected by another nearby source. The idea is that we want—for our science goals—to report a flux and uncertainty for every known source; we do not want to just cut out or mask or flag sources that might be bad or contaminated. Our method is very empirical and has no precise justification, but unless we think of something more clever (or go whole-Hogg and perform the photometry by simultaneously fitting all GALEX pixels with a self-consistent model of the sky), something heuristic like this will go into our planned data release.
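A heuristic of the kind described might look like the following sketch. To be clear, everything here is an illustrative assumption, not the strategy we actually worked out: the quadrature combination, the `radius` and `scale` values, and the idea of scaling the inflation by total neighbor flux are all made up for the example.

```python
import numpy as np

def inflate_errors(sigma, positions, neighbor_positions, neighbor_fluxes,
                   radius=6.0, scale=0.1):
    """Inflate photometric error bars for sources with nearby neighbors.

    Hypothetical heuristic (NOT the actual GALEX strategy): add, in
    quadrature, a contamination term proportional to the total flux of
    neighbors falling within `radius` (arcsec, say) of each source.
    """
    sigma = np.asarray(sigma, dtype=float).copy()
    for i, (x, y) in enumerate(positions):
        # distance from source i to every potential contaminant
        d = np.hypot(neighbor_positions[:, 0] - x,
                     neighbor_positions[:, 1] - y)
        # total flux of neighbors close enough to matter
        contamination = scale * neighbor_fluxes[d < radius].sum()
        # inflate in quadrature so isolated sources are untouched
        sigma[i] = np.hypot(sigma[i], contamination)
    return sigma
```

The appeal of something like this is that every source still gets a flux and an uncertainty, just with an honestly larger error bar where a neighbor plausibly contaminates the aperture, rather than a flag or a cut.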

1 comment:

  1. Most uncertainty estimates in the literature are unjustified. It's common practice to assume a probabilistic model (the priors AND the sampling distributions) that is a pathetic representation of anyone's actual beliefs. E.g., Gaussian noise.

    I think this comes down to the misconception that probability can deal with "random" errors but not "systematic" errors. There aren't really two kinds of errors. Errors are just "the amount by which your data are off relative to the thing you were trying to measure". There's no point doing an analysis and quoting a "formal statistical error" when you KNOW that p(D|theta) was a crap model of your prior beliefs.
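The Gaussian-noise point can be made concrete with a toy example. The data, the per-point sigma of 1, the Student-t with nu = 2, and the grid search are all illustrative choices of mine, not anything from the post: with one gross outlier, the Gaussian maximum-likelihood mean (and its formal error of sigma/sqrt(N)) is badly off, while a heavier-tailed sampling distribution barely notices.

```python
import numpy as np

# Five good measurements near 10, plus one gross outlier.
# Quoted per-point sigma is 1 (an assumption for illustration).
data = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 25.0])

# Gaussian likelihood: the ML mean is the sample mean, and the
# "formal statistical error" is sigma/sqrt(N), outlier or not.
mu_gauss = data.mean()
err_gauss = 1.0 / np.sqrt(len(data))

# Heavier-tailed Student-t likelihood (nu = 2, an illustrative choice):
# maximize the log-likelihood over a grid of candidate means.
nu = 2.0
grid = np.linspace(5.0, 30.0, 20001)
loglike = np.array([-0.5 * (nu + 1) * np.log1p((data - m) ** 2 / nu).sum()
                    for m in grid])
mu_t = grid[loglike.argmax()]
```

Here `mu_gauss` comes out at 12.5, pulled well away from the five good points, while `mu_t` stays close to 10. The Gaussian "formal error" of about 0.41 is exactly the kind of unjustified uncertainty estimate the comment is complaining about: it is only meaningful if you actually believe the Gaussian model.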