My only research today was a tiny bit of writing in Christina Eilers's paper on kinematically measured spiral structure in the Milky Way disk.
I read and commented on some documents today related to the calibration of the Local Volume Mapper part of the SDSS-V family of projects. The project is an intensity-mapping project to observe the interstellar medium in the Milky Way and nearby galaxies, using one spectrograph but many different telescopes (with different apertures). It's clever! The question is: Does this project need calibration telescopes in addition to the science telescope? My position is that it doesn't. Well, calibration telescopes might be very useful for debugging things and understanding things! But at the end of the day, calibration will be self-calibration, I bet. I'm offering very good odds.
One point is the following: When you have an imager or a spectrographic imager, you have to calibrate so that every exposure has calibration consistent with every other exposure, and every pixel has calibration consistent with every other pixel. Good! Now imagine you introduce a calibration telescope. Now you have to do the same for the calibration system, and you have to understand the cross-calibration between the two systems (science and calibration). So it greatly increases the difficulty of the task, introduces new variables, and (usually) reduces the precision of the final results. The self-consistency of the science data (provided that it is properly taken) is always the strongest constraint on calibration. See, for example, Planck, WMAP, SDSS, Pan-STARRS, and so on.
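To make the self-calibration point concrete, here is a minimal toy sketch (my own illustration, not the LVM pipeline, and all the numbers are made up): give each exposure an unknown zero-point offset and each star an unknown true magnitude; repeated observations then pin down both sets of parameters jointly, up to one overall gauge freedom, with no external calibration system.

```python
import numpy as np

# Toy self-calibration: each exposure has an unknown zero-point offset,
# each star has an unknown true magnitude; overlapping observations let
# us solve for both jointly (up to one overall gauge degree of freedom).
rng = np.random.default_rng(17)
n_stars, n_exp = 50, 8
true_mags = rng.uniform(12.0, 18.0, n_stars)
true_zps = rng.normal(0.0, 0.1, n_exp)

# every star observed in every exposure (dense toy case), plus noise
sigma = 0.01
obs = true_mags[:, None] + true_zps[None, :] \
    + rng.normal(0.0, sigma, (n_stars, n_exp))

# linear model: unknowns are [star mags, exposure zero points]
rows, y = [], []
for i in range(n_stars):
    for j in range(n_exp):
        row = np.zeros(n_stars + n_exp)
        row[i] = 1.0
        row[n_stars + j] = 1.0
        rows.append(row)
        y.append(obs[i, j])
A = np.array(rows)
y = np.array(y)

# fix the gauge: require the zero points to sum to zero
gauge = np.zeros(n_stars + n_exp)
gauge[n_stars:] = 1.0
A = np.vstack([A, gauge])
y = np.append(y, 0.0)

params, *_ = np.linalg.lstsq(A, y, rcond=None)
zp_hat = params[n_stars:]
# compare to the truth in the same gauge (mean zero point removed)
resid = zp_hat - (true_zps - true_zps.mean())
print(np.max(np.abs(resid)))  # small: near the per-exposure noise floor
```

The point of the sketch is that the zero points come out at the noise floor from the science data alone; a separate calibration system would add a second set of unknowns plus a cross-calibration term without removing the gauge freedom.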
In a very low research day, Megan Bedell (Flatiron) and I discussed proposals for inter-disciplinary and inter-group spectroscopic surveys of very bright stars. These would give general information about abundances, binarity, activity, variability, and suitability for further study (by, say, extreme precision radial-velocity projects). She has been thinking about target selection, and we discussed ways to make it very very simple. My position (as my loyal reader knows) is that it is better to be simple and somewhat inefficient than it is to be complex and very efficient. For legacy value, anyway, which is the whole point of a survey like this.
Friday mornings in NYC usually start with a free-form meeting on the 11th floor of Flatiron. Today Spergel, Johnston, Gandhi, and Price-Whelan were all at the table. We began by discussing some of the accomplishments that have set the tone and agenda of the data-group and dynamics-group activities at Flatiron. Then we started to discuss what I call The Snail: The phase spiral found by Antoja in the ESA Gaia DR2 data. As my loyal reader knows, we are trying to use it to infer the dynamical properties of the Milky Way disk. And we would also like to use it to infer things about events in the recent past of the Milky Way. We discussed the possibility (suggested by the observations) that The Snail is not just one event but really two. It looks different when you look at stars with different angular momenta (different guiding centers, and hence different histories in their orbits around the Milky Way). In general the question is: Do Snails created in simulated galaxies look anything like the Snail we have?
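As a toy illustration of the phase-mixing idea behind The Snail (my own sketch, with invented numbers; not any of our actual models): in an anharmonic vertical potential the oscillation frequency depends on amplitude, so a coherent vertical perturbation winds up into a spiral in the (z, v_z) plane, and the amount of winding encodes the time since the event.

```python
import numpy as np

# Toy phase mixing: stars get a common phase kick, but their vertical
# frequencies depend on amplitude (anharmonicity), so the population
# winds into a spiral in the (z, v_z) plane as time passes.
rng = np.random.default_rng(8)
n = 2000
amp = rng.uniform(0.2, 1.0, n)        # vertical amplitudes (arbitrary units)
omega = 1.0 / (1.0 + 0.3 * amp)       # frequency falls with amplitude (assumed form)
phi0 = 0.5                            # common initial phase: the "kick"
t = 100.0                             # time since the perturbation

phi = phi0 + omega * t
z = amp * np.cos(phi)                 # spiral coordinates in the vertical
vz = -amp * omega * np.sin(phi)       # phase plane

# at t = 0 all stars share one phase; by time t the phase varies
# monotonically with amplitude, tracing a spiral
wraps = np.ptp(omega) * t / (2.0 * np.pi)
print(wraps)  # number of spiral wraps across the sampled amplitudes
```

In this cartoon the number of wraps grows linearly with time, which is the sense in which the winding of the observed Snail dates the perturbation; two superposed events with different kick times would show up as two interleaved spirals.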
It was a fun morning with Zach Martin (NYU) and Teresa Huang (NYU), talking about adversarial attacks against astronomical machine-learning methods. In this context I mentioned the EURion constellation. I said that money can't be photocopied. They didn't believe me. Am I a conspiracy theorist? Yes! But I'm right on this one. We went to the photocopier room and demonstrated the awesome that is secret agreements between electrostatic hardware companies and Western governments. But then Martin said “They went through all that trouble to stop what? Who photocopies money? What kind of a stupid scam is that?”
[It's been a hard 2020. Apologies for my violations of the Rules to the right. I can assure you that it has been for a good set of reasons.]
I spent part of my research day listening to Adrian Price-Whelan (Flatiron) talk out a few different job talks. His challenge is to explain why we can learn things about the dark matter with streams. And explain why we can't! That is, streams are complicated.
In the latter part of our conversation I asked him if he could find APOGEE exposures of accelerating stars that are accelerating so strongly (from a binary orbit, say) that we could measure the velocity difference between the first half of the exposure and the second half. Why? Because the APOGEE observations are taken “up the ramp”; this makes it possible to split them after the fact. Any up-the-ramp imager takes data that can be sub-framed after the fact, which leads to all sorts of possible time-domain projects! Let's figure that out.
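To make the up-the-ramp idea concrete, here is a toy sketch (illustrative numbers, not APOGEE data): the detector samples the accumulated counts non-destructively many times during the exposure, so the count rate can be re-fit on any sub-interval after the fact; here the rate changes mid-exposure, standing in for a spectral feature shifting as the star accelerates, and the two half-ramp fits disagree.

```python
import numpy as np

# Toy up-the-ramp read: a pixel is sampled non-destructively as charge
# accumulates, so the count rate can be re-fit on any sub-interval
# after the fact. Here the rate changes halfway through the exposure.
rng = np.random.default_rng(3)
n_reads = 40
t = np.arange(n_reads, dtype=float)              # read index as time
rate = np.where(t < n_reads / 2, 10.0, 12.0)     # counts per read
counts = np.cumsum(rate) + rng.normal(0.0, 2.0, n_reads)  # + read noise

# fit a straight line (slope = rate) to each half of the ramp separately
first = slice(0, n_reads // 2)
second = slice(n_reads // 2, n_reads)
rate1 = np.polyfit(t[first], counts[first], 1)[0]
rate2 = np.polyfit(t[second], counts[second], 1)[0]
print(rate1, rate2)  # the two "sub-exposures" recover different rates
```

The same trick applies to any up-the-ramp imager: because nothing is destroyed at read time, sub-framing is a pure software decision made after the exposure is over.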
In Stars & Exoplanets Meeting today, we had a discussion about getting ready for ESA Gaia EDR3. What should we be doing? And Megan Ansdell (Flatiron) told us about using shallow-ish convolutional neural networks to find stellar flares in the presence of astrophysical noise.