Is the snail two snails?

Friday mornings in NYC usually start with a free-form meeting on the 11th floor of Flatiron. Today Spergel, Johnston, Gandhi, and Price-Whelan were all at the table. We began by discussing some of the accomplishments that have set the tone and agenda of the data-group and dynamics-group activities at Flatiron. Then we started to discuss what I call The Snail: The phase spiral found by Antoja in the ESA Gaia DR2 data. As my loyal reader knows, we are trying to use it to infer the dynamical properties of the Milky Way disk. And we would also like to use it to infer things about events in the recent past of the Milky Way. We discussed the possibility (suggested by the observations) that The Snail is not just one event but really two. It looks different when you look at stars with different angular momenta (different guiding centers, and hence different histories in their orbits around the Milky Way). In general the question is: Do Snails created in simulated galaxies look anything like the Snail we have?


adversarial attacks and the EURion constellation

It was a fun morning with Zach Martin (NYU) and Teresa Huang (NYU), talking about adversarial attacks against astronomical machine-learning methods. In this context I mentioned the EURion constellation. I said that money can't be photocopied. They didn't believe me. Am I a conspiracy theorist? Yes! But I'm right on this one. We went to the photocopier room and demonstrated the awesome that is secret agreements between electrostatic-hardware companies and Western governments. But then Martin said “They went through all that trouble to stop what? Who photocopies money? What kind of a stupid scam is that?”


splitting exposures after the fact

[It's been a hard 2020. Apologies for my violations of the Rules to the right. I can assure you that it has been for a good set of reasons.]

I spent part of my research day listening to Adrian Price-Whelan (Flatiron) talk out a few different job talks. His challenge is to explain why we can learn things about the dark matter with streams. And explain why we can't! That is, streams are complicated.

In the latter part of our conversation I asked him if he could find APOGEE exposures of accelerating stars that are accelerating so strongly (from a binary orbit, say) that we could measure the velocity difference between the first half of the exposure and the second half. Why? Because the APOGEE observations are taken “up the ramp”; this makes it possible to split them after the fact. Any up-the-ramp imager takes data that can be sub-framed after the fact, which leads to all sorts of possible time-domain projects! Let's figure that out.
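The splitting itself is almost trivial, because each up-the-ramp read is a cumulative count. Here is a toy sketch (fake data and made-up names, nothing APOGEE-specific) of recovering first-half and second-half sub-exposures by differencing the reads:

```python
import numpy as np

# toy up-the-ramp exposure: n_reads non-destructive reads of the
# *cumulative* counts, here for a constant flux plus read noise
rng = np.random.default_rng(42)
n_reads, flux = 12, 100.0          # counts per read interval (made up)
t = np.arange(1, n_reads + 1)
ramp = flux * t + 2.0 * rng.normal(size=n_reads)

# split after the fact: difference the cumulative reads to get the
# counts accumulated in each half of the exposure separately
half = n_reads // 2
counts_first = ramp[half - 1]              # zero reset level assumed
counts_second = ramp[-1] - ramp[half - 1]
```

In the binary-star application, you would extract a spectrum from each half and cross-correlate; any velocity change during the exposure shows up as a shift between the two halves.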

In Stars & Exoplanets Meeting today, we had a discussion about getting ready for ESA Gaia EDR3. What should we be doing? And Megan Ansdell (Flatiron) told us about using shallow-ish convolutional neural networks to find stellar flares in the presence of astrophysical noise.


NASA funding white paper

[I have been off the grid doing important things. Apologies to my loyal reader.]

I spoke to Megan Bedell (Flatiron) about representing Flatiron at the Terra Hunting Experiment collaboration meeting in Cambridge, UK, next week. I think we see eye-to-eye on all things. In general, we at Flatiron are for transparency in operations, openness, data and code releases, and building legacy value.

I spent most of my research time responding (at the very last minute) to this NASA Request For Information. In response I wrote a white paper about supporting the development of blue-sky methods and software that might enable qualitatively new missions and capabilities. My (very hastily written; my apologies!) white paper is here. Comments are greatly appreciated (even though it is too late for the RFI itself).


spiral density perturbation

I had a call today with Eilers (MIT) and Rix (MPIA) about making self-consistent perturbations to a simple galaxy disk model and interpreting kinematic signatures therein. We have been doing this for months but there are still some conceptual issues that are difficult. Like: Can we really confine our perturbation to the disk plane, or do we have to give it non-trivial three-dimensional structure? We don't all agree, but we are getting closer. It turns out: Dynamics is hard!


calibration of spectrographs

I was barely present at work today! Things going on. But the first cohort of pre-doctoral fellows at Flatiron completed their terms this week, and they gave an amazing set of talks, which spanned a huge range of science. What a pleasure. My (biased) favorite is Lily Zhao (Yale), who has potentially revolutionized how spectrographs will be calibrated. As my loyal reader knows!


stellar survey

Bedell (Flatiron) and I were on the Terra Hunting Experiment science call where we discussed the idea that the whole extreme precision radial-velocity (EPRV) community might collaborate on doing some big target-selection surveys of relevant bright stars. Different surveys will want to make different choices, but we all want the same kinds of input data to make those choices. So maybe we should just band together and observe the heck out of the possible targets (bright main-sequence stars)? If you are in the community and you want in, send us email!


talking about the future of NASA funding

My only research today was a conversation with Dustin Lang (Perimeter) about NASA funding programs. I am thinking about responding to this call for information.


interpolation methods

It was a low-research day (job season) but I worked a bit with Lily Zhao (Yale) on interpolation methods and on comparing interpolations. This is for our hierarchical, non-parametric wavelength-calibration method.


non-parametric and hierarchical

On the flight home from #AAS235, I did some writing in a paper by Lily Zhao (Yale) about spectrograph (wavelength) calibration. I'm very excited about this project; we removed all dependence on polynomials and other kinds of strict functional forms. We went non-parametric. But of course this greatly increases the degrees of freedom of the fitting or interpolation of the calibration data. So when we do this, we also have to go hierarchical; we have to restrict the calibration freedom using the data. That is, we don't have any strict functional form for the calibration of the spectrograph, but we require that the calibration solution we find lives in the space of solutions that we have seen before. In short: If you increase the freedom by going non-parametric, you need to restrict the freedom by going hierarchical. (The results look incredible.)
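To make the idea concrete, here is a toy sketch (my own invented data and names, not the actual method or code from the paper): learn the space of previously seen calibration solutions via an SVD, then force a new, sparsely constrained solution to live in that learned space.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix = 200
pix = np.linspace(0., 1., n_pix)

# fake archive of past wavelength solutions: a smooth mean plus a
# couple of low-dimensional modes of variation (all invented here)
mean = 5000.0 + 1000.0 * pix
modes = np.vstack([np.sin(np.pi * pix), pix ** 2])
archive = mean + rng.normal(size=(50, 2)) @ (0.1 * modes)

# hierarchical step: learn the space of solutions seen before
mu = archive.mean(axis=0)
_, _, vt = np.linalg.svd(archive - mu, full_matrices=False)
basis = vt[:2]                     # keep the dominant modes

# a new calibration frame constrains only a few pixels (arc lines);
# the non-parametric solution is restricted to the learned space
lines = rng.choice(n_pix, size=10, replace=False)
truth = mu + 0.1 * (0.7 * modes[0] - 0.3 * modes[1])
amps, *_ = np.linalg.lstsq(basis[:, lines].T, (truth - mu)[lines],
                           rcond=None)
solution = mu + amps @ basis       # full solution from 10 constraints
```

With the freedom restricted to the learned space, ten arc lines are enough to pin down the full 200-pixel solution in this toy.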


#AAS235, day 4 and #hackaas

Today was Hack Together Day #hackaas at #AAS235. We computed that this is the eighth winter AAS meeting to have a hack day, making it (AAS Hack Together Day) one of my scientific accomplishments of the decade. At the hack day, the main thing I did was hack on hack day, working with Jim Davenport (UW) to brainstorm things we can do to keep the event fresh, and keep us experimenting with it. I also had a great conversation with Brigitta Sipocz, Geert Barentsen, and others about ways we can use our hacking and design thinking to support a reduction in CO2 emissions by astronomers and academics in general. Related to my conversations of yesterday.

But many great things happened in the Hack Together Day. Too many to list here. Look at the wrap-up slides to get a sense of the range and depth of the projects. So many people learned a lot and did a lot. I'm proud, which is a sin, apparently.


remote meetings

A highlight of today was a long meeting with Chris Lintott (Oxford) covering many subjects. But he told me about dot-dot-astronomy, which is a fully-remote reboot they are working on for the niche but extremely influential dot-astronomy meetings. The idea is to go fully remote—all participants remote—but then change the meeting expectations and structure to respect that. The idea is: Maybe not try to do remote meetings so they are just as good as face-to-face meetings, but to try to do remote meetings so they are something very different from face-to-face meetings. That seems like a great idea. Let's re-frame our goals. We have to do something about what we are doing to this planet.


#AAS235, day 2

My personal life relented slightly and I got to Hawaii for a bit of the 235th Meeting of the American Astronomical Society. It is great to see the whole community (or a very large part of it) in one place at one time; I'm still a believer in these meetings, after all these years (I've been attending pretty regularly since 1994). Oh no, has this become “old-fogey research blog”?

Because I arrived today I only saw a few talks, one of which was by my student Storey-Fisher (NYU), who explained how we can estimate the two-point correlation function without binning the data into separation bins. She did a good job of summarizing the benefits, which are legion: We lower both the bias and the variance relative to the traditional methods, and we can work in function spaces that are appropriate to our science questions, to give just two examples. I can't wait to submit this paper.
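The core move can be shown in toy form (this is my illustration, not her estimator or code; the cosine basis and the fake pair separations are assumptions): instead of histogramming pair separations into top-hat bins, project the same pairs onto continuous basis functions.

```python
import numpy as np

rng = np.random.default_rng(3)
r = rng.uniform(0., 1., 5000)      # fake pair separations

# binned estimator: top-hat bins, i.e. a histogram of separations
bins = np.linspace(0., 1., 11)
dd_binned, _ = np.histogram(r, bins=bins)

# unbinned estimator: evaluate smooth basis functions at every pair
# separation and sum; a histogram is the special case where the
# basis functions are the top-hat bins themselves
def basis(r, k):
    return np.cos(np.pi * k * r)   # toy cosine basis

dd_proj = np.array([basis(r, k).sum() for k in range(5)])
```

In a full estimator, projections like these (for the data-data, data-random, and random-random pairs) would replace the binned pair counts, and the recovered correlation function comes out as a smooth function rather than a set of bin heights.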

Her talk was followed by an excellent talk by Shajib (UCLA) about gravitational lensing and the Hubble-Constant controversy. He showed that the lensing results are falling in line with the late-time, supernova-based Hubble-Constant measurements, not the CMB and BAO measurements. And the biggest systematic in his time-delay analyses is (as expected) the foreground “mass sheet” degeneracy. He is getting close to achieving one of the dreams of this field (a dream I have shared with Phil Marshall, for example), which is to automate the fitting of non-trivial strong gravitational lens systems, including the lensing galaxy and multiple source galaxies. Beautiful stuff.

And at this meeting there was so much more, almost infinitely more!


adversarial attacks on linear models

I got some work done today on my project with Soledad Villar (NYU) to understand the differences between discriminative and generative models. I wrote code to make L2-normalized and single-pixel (or sparse) attacks on the discriminative model. Everything is linear, so these attacks aren't dramatic, but they definitely work. I can make obviously irrelevant moves that change the slope (context is: fitting a straight line, using machine learning!).
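Because the fit is linear, the optimal attacks have closed forms: the least-squares slope is a linear functional m = w·y of the data, so the best L2-bounded perturbation is parallel to w, and the best single-pixel perturbation hits the largest-|w| data point. A minimal sketch with fake data (the numbers and names are mine, not the actual project code):

```python
import numpy as np

rng = np.random.default_rng(17)
x = np.linspace(0., 10., 25)
y = 2.0 * x + 1.0 + 0.5 * rng.normal(size=x.size)   # fake data

# straight-line fit: the slope is a linear functional of the data,
# slope = w @ y, with w the slope row of the design pseudo-inverse
A = np.vstack([x, np.ones_like(x)]).T
w = np.linalg.pinv(A)[0]
slope = w @ y

eps = 1.0                          # attack budget

# L2-normalized attack: the perturbation of norm eps that maximally
# changes the slope is parallel to w
delta_l2 = eps * w / np.linalg.norm(w)

# single-pixel (sparse) attack: spend all of eps on the data point
# with the largest weight magnitude
delta_1px = np.zeros_like(y)
i = np.argmax(np.abs(w))
delta_1px[i] = eps * np.sign(w[i])
```

The resulting slope changes are eps·‖w‖ and eps·max|w| respectively: not dramatic, exactly as the linearity suggests, but guaranteed to work.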