Bolton, Lang, and many others showed up today for a two-day target-selection hack-fest at NYU, hosted by Jeremy Tinker (NYU). I spent as much time as I could distracting them from their goals. In particular, I got Bolton to recap his very nice results on whether you should expand a sample to more objects or go deeper on the objects you already have. His basic result (a quantitative result, resting on very few assumptions) is that astronomers tend to go too deep when they should be expanding their samples. At least when you have a well-defined quantity that you want to measure (the population mean and variance of quantity X, for example), you usually do better by taking shallower data on a larger sample of objects. It is not the way to discover or explore, but it is the way to measure. I encouraged him to publish this. It is relevant to many things we work on.
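To see the scaling (my back-of-the-envelope version, not necessarily Bolton's actual calculation): if you want the population mean of X, the error on the sample mean is sqrt((sigma_pop^2 + sigma_noise^2) / N). At fixed total telescope time the per-object noise variance grows in proportion to N, so the noise term in that error is pinned while the intrinsic-scatter term falls as 1/N; going wider wins whenever there is any intrinsic scatter. A minimal simulation, with illustrative numbers:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative numbers, not Bolton's: measure the population mean of X
# at fixed total telescope time. Per-object noise variance scales as
# 1 / (time per object), so 4x the objects means 4x the noise variance.
sigma_pop = 1.0    # intrinsic population scatter in X
sigma_noise = 0.5  # per-object noise sigma at the "deep" exposure time
n_deep = 100       # sample size for the deep strategy

strategies = {"deep": (n_deep, sigma_noise),
              "wide": (4 * n_deep, 2.0 * sigma_noise)}  # 4x objects, 1/4 time each

for label, (n, sig_n) in strategies.items():
    truth = rng.normal(0.0, sigma_pop, size=(20000, n))  # true X values
    obs = truth + rng.normal(0.0, sig_n, size=(20000, n))  # noisy measurements
    print(label, obs.mean(axis=1).std())  # scatter of the sample mean
# deep: ~0.112, wide: ~0.071 -- wider wins whenever sigma_pop > 0
```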
If you, hypothetically, had the choice between (a) observing 400 deg^2 of the sky, looking for the BB polarization that would indicate primordial gravitational waves, and (b) observing the entire sky for 1% as much time per deg^2, which would you choose? Option (b) gives 100x as many modes, each at 1/10 the S/N. The BICEP2 team says their strategy (i.e., option a) was superior. Do you understand why?
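For what it's worth, one way to see it (my sketch, not necessarily the BICEP2 team's argument): measuring BB power is measuring a variance, and the standard error on a band power estimated from n Gaussian modes with known per-mode noise power N is sqrt(2/n) * (S + N). In the noise-dominated regime (S much less than N), diluting the time per deg^2 by 100x inflates N by 100x, while the 100x extra modes only buy back a factor of 10:

```python
import numpy as np

# Toy scaling, not the BICEP2 analysis: estimate a band power S from
# n Gaussian modes, each with known noise power N. The standard error
# on the estimate is sqrt(2/n) * (S + N): sample variance when the
# signal dominates, noise variance when the noise dominates.
def band_power_error(n_modes, noise_power, signal_power):
    return np.sqrt(2.0 / n_modes) * (signal_power + noise_power)

S = 0.01        # signal power, well below the per-mode noise
n_deep = 4000   # modes in the 400 deg^2 patch (illustrative number)
N_deep = 1.0    # per-mode noise power for strategy (a)

err_a = band_power_error(n_deep, N_deep, S)              # deep patch
err_b = band_power_error(100 * n_deep, 100 * N_deep, S)  # full sky, 1% time
print(err_a, err_b)  # err_b is ~10x err_a: deep wins while noise-dominated
```

The choice flips once you are sample-variance dominated (S much greater than N), which is exactly the wide-beats-deep regime of Bolton's argument above.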