Today was a one-day workshop on multi-messenger astrophysics to follow yesterday’s one-day workshop on physics and machine learning. There were interesting talks and discussions all day, and I learned a lot about facilities, operations, and plans. Two little things that stood out for me were the following:
Collin Capano (Hannover) spoke about his work on detecting sub-threshold events in LIGO using coincidences with other facilities, especially NASA Fermi. He made some nice Bayesian points about how, at fixed signal-to-noise, the maximum possible confidence in such coincidences grows with the specificity and detail (that is, the predictive power) of the event models. This has important consequences for things we have been discussing at NYU in our time-domain meeting. But Capano also implicitly made a strong argument that projects cannot simply release catalogs or event streams: by definition, sub-threshold events require the combination of probabilistic information from multiple projects. For example, in his own work he had to re-process the Fermi GBM photon stream. These considerations, about needing access to full likelihood information, have very important implications for all new projects, and especially for LSST.
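To make that Bayesian point concrete, here is a toy sketch (my own illustration, not Capano's actual analysis, with made-up numbers): a gravitational-wave trigger and a gamma-ray trigger are separated by a time offset dt. Under the "associated" hypothesis the offset is confined to a narrow, physically motivated window of width w; under the "chance coincidence" hypothesis it is uniform over the broad search window of width W. The Bayes factor then scales like W/w, so a more specific (more predictive) model delivers higher confidence from the very same data.

```python
import numpy as np

def coincidence_bayes_factor(dt, w, W):
    """Bayes factor p(dt | associated) / p(dt | chance) for one time offset.

    Under association, dt is uniform on a narrow window of width w;
    under chance coincidence, dt is uniform on the broad window of width W.
    """
    p_assoc = 1.0 / w if abs(dt) <= w / 2.0 else 0.0  # specific, predictive model
    p_chance = 1.0 / W                                 # vague chance model
    return p_assoc / p_chance

dt = 1.7      # observed GW-GRB offset in seconds (invented for illustration)
W = 1000.0    # broad search window in seconds (invented)
for w in (100.0, 10.0, 5.0):
    bf = coincidence_bayes_factor(dt, w, W)
    print(f"window width w = {w:6.1f} s -> Bayes factor = {bf:6.1f}")
```

As the window shrinks (while still containing the data), the Bayes factor grows. The same logic suggests why full likelihood access matters: a thresholded catalog throws away the sub-threshold probabilistic information that this kind of coincidence calculation needs.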
Daniela Huppenkothen (UW) included in her talk on software systems some comments about ethics: Is there a role for teaching and training astronomers in the ethical aspects of software or machine learning? She gave the answer "yes", focusing on the educational point that we are launching our people into diverse roles in science and technology. I spoke to her briefly after her talk about telescope scheduling and operations: Since telescope time is a public good and zero-sum, we are compelled to use it wisely, and efficiently, and transparently, and (sometimes) even explainably. And with good legacy value. That's a place where we need to develop ethical systems, at least sometimes. All that said, I don't think much thought has gone into the ethical aspects of experimental design in astronomy.