2017-12-22

testing tacit knowledge about measurements

In astrometry, there is folk knowledge (tacit knowledge?) that the (best possible) uncertainty you can obtain on any measurement of the centroid of a star in an image is proportional to the size (radius or diameter or FWHM) of the point-spread function, and inversely proportional to the signal-to-noise ratio with which the star is detected in the imaging. This makes sense: The sharper a star is, the more precisely you can measure it (provided you are well sampled and so on), and the more data you have, the better you do. These are (as my loyal reader knows) Cramér–Rao bounds, and they are related directly to the Fisher information.
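The astrometry scaling can be checked numerically from the Fisher information. Here is a minimal sketch (my own illustration, not code from any project mentioned here), assuming a Gaussian PSF observed with pure photon (Poisson) noise; the function name and grid choices are hypothetical:

```python
import numpy as np

def centroid_bound(fwhm, n_photons, dx=0.1):
    """Cramér–Rao bound on the centroid mu of a Gaussian PSF
    with n_photons total photons and Poisson noise per pixel.
    Fisher information: I = sum_x (df/dmu)^2 / f,
    where f is the expected counts in each pixel."""
    w = fwhm / 2.3548  # Gaussian sigma from FWHM
    x = np.arange(-8.0 * w, 8.0 * w, dx)  # pixel grid
    f = n_photons * dx * np.exp(-0.5 * x**2 / w**2) / (w * np.sqrt(2.0 * np.pi))
    dfdmu = f * x / w**2  # derivative of the model w.r.t. the centroid
    fisher = np.sum(dfdmu**2 / f)
    return 1.0 / np.sqrt(fisher)

# Doubling the FWHM at fixed photon count doubles the bound;
# quadrupling the photon count (doubling the SNR) halves it,
# consistent with sigma ∝ FWHM / SNR.
```

This reproduces the folk scaling analytically too: for a Gaussian the Fisher information works out to N / w², so the bound is w / sqrt(N), which is FWHM over SNR for photon-limited detection.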

Oddly, in spectroscopy, there is folk knowledge that the best possible uncertainty you can obtain on the radial velocity of a star is proportional to the square root of the width (FWHM) of the spectral lines in the spectrum. I was suspicious, but Bedell (Flatiron) demonstrated this today with simulated data. It's true! I was about to resign my job and give up, when we realized that the difference is that the spectroscopists don't keep the signal-to-noise ratio fixed when they vary the line widths! They keep the contrast fixed, where the contrast is the depth of the line (or lines) at maximum depth, in a continuum-normalized spectrum.
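Both scalings can be seen in one toy calculation. The sketch below (my own illustration, assuming a single Gaussian absorption line in a continuum-normalized spectrum with constant Gaussian pixel noise; names and parameter values are hypothetical) computes the Cramér–Rao bound on the line center. Holding the contrast (depth) fixed while widening the line gives a bound growing like sqrt(width), the spectroscopists' rule; holding the matched-filter SNR fixed instead, which scales as depth times sqrt(width), gives a bound growing like the width, the astrometrists' rule:

```python
import numpy as np

def line_center_bound(width, depth, noise=0.01, dx=0.05):
    """Cramér–Rao bound on the center mu of a continuum-normalized
    absorption line 1 - depth * exp(-0.5 (x - mu)^2 / width^2),
    with iid Gaussian noise per pixel."""
    x = np.arange(-10.0 * width, 10.0 * width, dx)
    dfdmu = depth * np.exp(-0.5 * x**2 / width**2) * x / width**2
    fisher = np.sum(dfdmu**2) / noise**2
    return 1.0 / np.sqrt(fisher)

# Fixed contrast: double the width, bound grows by sqrt(2).
r_contrast = line_center_bound(2.0, 0.5) / line_center_bound(1.0, 0.5)

# Fixed SNR: the detection SNR scales as depth * sqrt(width), so to
# hold it fixed, scale the depth down by sqrt(2) when doubling the
# width; then the bound doubles.
r_snr = line_center_bound(2.0, 0.5 / np.sqrt(2.0)) / line_center_bound(1.0, 0.5)
```

The two folk rules are the same Cramér–Rao bound evaluated under different things held fixed, which is (I think) the resolution of the confusion.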

This all makes sense and is consistent, but my main research event today was to be hella confused.
