[On travel, so posting is irregular; see Rules.]
Astronomers spend a lot of their time estimating their errors
(meaning the variances or standard deviations of the noise contributions to their measurements), as they should! However, these error analyses often rest on a lot of unwanted assumptions. For example, in the case of stellar metallicity measurements (the case I am working on with Bovy and Rix), the errors are estimated by looking at the variation across stellar models. Because the range of stellar models is both too large (many of the models are in fact ruled out by the data) and too small (many real observed stars are not well fit by any model), the errors estimated by standard error-estimation techniques can be either too small or too large, depending on how the modeling disagrees with reality.
In the limit of large amounts of data, any machine-learner will tell you that if your uncertainty variances matter (and they do), then you must be able to infer them along with the parameters of true interest. That is, when you have a lot of data, the data themselves probably tell you more about your measurement error properties (your noise model) than any external or subsequent error analysis does. The crazy thing is that it is clear from the informative detail we see in Figure 2 of this paper that the team-reported error variances on the SEGUE metallicity measurements are substantial over-estimates! There might be large biases in these measurements, but there simply can't be large scatter. Now how to convince the world?
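To make the point concrete, here is a minimal toy sketch (not the SEGUE analysis, and all numbers invented): fake data are generated with true scatter smaller than the reported per-star errors, and then a single mean plus a scale factor on the reported variances are fit by maximum likelihood. With enough data points, the likelihood pins down the variance scale, which is the sense in which the data constrain the noise model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

# Fake "catalog": the true scatter is half the reported errors,
# i.e. the reported variances are over-estimated by a factor of 4.
n = 2000
reported_sigma = rng.uniform(0.15, 0.25, size=n)          # reported per-star uncertainties
true_mu = -0.3                                             # true (common) value of the quantity of interest
y = true_mu + 0.5 * reported_sigma * rng.normal(size=n)    # actual noise is smaller than reported

def negative_log_likelihood(params):
    """Gaussian likelihood with variances f * reported_sigma**2."""
    mu, log_f = params
    f = np.exp(log_f)                                      # variance scale factor, kept positive
    var = f * reported_sigma**2
    return 0.5 * np.sum((y - mu)**2 / var + np.log(2.0 * np.pi * var))

result = minimize(negative_log_likelihood, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, f_hat = result.x[0], np.exp(result.x[1])
print(f"mu_hat = {mu_hat:.3f}, variance scale f_hat = {f_hat:.2f}")
# f_hat comes out near 0.25: the data themselves say the reported
# variances are about four times too large.
```

In a real application the mean would be replaced by whatever parameters are of true interest, and the noise model would be inferred jointly with them rather than taken from an external error analysis.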