I had a useful meeting with Lily Zhao (Yale), Megan Bedell (Flatiron), and Matt Daunt (NYU) to discuss Zhao's and Daunt's various data-analysis projects in precision spectroscopy. In both cases, we spent a lot of time looking at figures (and, in Zhao's case, interactively making figures in the meeting). This is generic: We spend way more time looking at visualizations of issues than we do reading the code that generates them. I think that's as it should be: code has to make sensible figures, and reading code can lead to all sorts of confusions. And, besides, debugging follows the scientific method: You hypothesize things the code could be doing wrong, you design figures that would demonstrate the bug, you predict what those figures should and shouldn't show, you make the figures, and you conclude and create new hypotheses. It's funny: I currently don't think that Science (tm) follows the scientific method, but I think debugging scientific code does. Hmmm.
While sitting in a freezing-cold car (not mine!), I pair-coded with (well, really, watched code being written by) Adrian Price-Whelan (Flatiron) on the SDSS-IV APOGEE visit sub-frames; the idea is to get higher time-resolution radial-velocity information. In the conversation while code was being hacked, we set scope for a couple of possible papers: In one, we could show that we measure short-period binary orbital parameters more precisely (and more accurately?) with finer time-resolution measurements. In another, we could show that we can measure asteroseismic modes across the red-giant branch. We don't have either result yet, so I am just dreaming. But it's related to the point that it is sometimes hard to publish technical contributions to astronomy.
Today is the 16th birthday of this blog! Yes, this blog has been going for 16 years, and if I trust my platform, this post will be post number 3753. I had a great research day today. In Stars & Exoplanets meeting Rodrigo Luger (Flatiron) showed his nice information-theoretic results on what you can learn from stellar light curves about stellar surfaces, and Sam Grunblatt (AMNH) showed some planets that have—or should have—changing orbital periods as they inspiral into their host stars. I asked Grunblatt about the resonances that might be there, like the ones I just learned about in Saturn's rings: Are planet inspirals sensitive to asteroseismic resonances?
Before and after this meeting, Adrian Price-Whelan (Flatiron) and I continued working on measuring radial velocities in SDSS-IV APOGEE sub-exposures. We find so many weird effects that we are confused! We find sub-hour velocity trends, but they seem to have the wrong slopes (accelerations) given what we know about the targets. It might have to do with badly masked bad pixels in the spectra...
Adrian Price-Whelan (Flatiron) opened up the black box of the SDSS-IV APOGEE data, looking at whether we can measure stellar radial velocities in the short (9-min) sub-exposures that make up a full (1-ish hour) APOGEE exposure. We pair-coded for much of the morning and showed that yes, yes we can! This conceivably increases the time resolution of the APOGEE data considerably, and is useful for short-period systems (and, I hope, red-giant asteroseismology). We have to figure out what's our best method for making the measurements now.
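One standard way to make such a measurement is to cross-correlate each sub-exposure against a template over a grid of trial velocities. Here's a toy, one-dimensional sketch of that idea (not the APOGEE pipeline; the wavelengths, line list, and noise level below are all invented for illustration):

```python
import numpy as np

c = 299792.458  # speed of light, km/s

def make_template(wave, line_centers, depth=0.6, width=0.5):
    """Toy stellar spectrum: Gaussian absorption lines on a flat continuum."""
    flux = np.ones_like(wave)
    for mu in line_centers:
        flux -= depth * np.exp(-0.5 * ((wave - mu) / width) ** 2)
    return flux

def shift(wave, flux, v):
    """Doppler-shift a spectrum by velocity v (km/s), resampled onto wave."""
    return np.interp(wave, wave * (1.0 + v / c), flux)

def measure_rv(wave, flux, template, v_grid):
    """Cross-correlate against shifted templates; return the peak velocity."""
    ccf = [np.sum(flux * shift(wave, template, v)) for v in v_grid]
    return v_grid[int(np.argmax(ccf))]

rng = np.random.default_rng(42)
wave = np.linspace(15100.0, 15800.0, 4000)  # H-band-ish wavelengths, Angstroms
template = make_template(wave, rng.uniform(15150.0, 15750.0, size=30))

v_true = 17.0  # km/s
observed = shift(wave, template, v_true) + 0.01 * rng.normal(size=wave.size)

v_grid = np.arange(-100.0, 100.0, 0.5)
v_hat = measure_rv(wave, observed, template, v_grid)
print(v_hat)  # within a grid step or two of v_true
```

The real question (per the above) is what the best version of this is for noisy 9-minute sub-exposures: choice of template, treatment of masked pixels, and interpolation off the velocity grid all matter.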
My day had a twitter (tm) component in which I asked about the correspondences between gaps in Saturn's rings and low-integer-denominator resonances with Saturn's moons (and, I learned, planetary seismic modes). This led me to Jason Hunt (Flatiron), whom I asked about resonances in the Milky Way disk: Can the gaps in velocity space in the local disk be associated cleanly with particular resonances with the bar or spiral structure or Sagittarius? He thought yes, for some, and made some nice plots of the three orbital frequencies as a function of velocity in the local neighborhood. These are all steps towards (in my mind) figuring out the frequencies of disk perturbations more-or-less directly from the data.
As my loyal reader knows, Lily Zhao (Yale) is looking at whether spectral shape changes predict radial-velocity mistakes in extreme-precision radial-velocity projects. She finds that they do! This time in simulated spectra, in which there are cool, tiny star spots on the surface of a rotating star. This gives us (Zhao, Megan Bedell and me) hope that we can apply this to real data, if we can model the telluric interference accurately enough. My job on this project is to show that what we are doing is an approximation to Doppler spectroscopic imaging.
Today I spoke with Matt Daunt (NYU) about subtleties in modeling stellar spectra as observed through a gas cell or the atmosphere: The effects of these things are multiplicative on the spectrum, but multiplicative at extremely high resolution; they aren't strictly multiplicative at low resolution (because convolution doesn't commute with multiplication!). He is close to being able to reproduce some of the results from our wobble paper.
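The non-commutation is easy to demonstrate numerically. In this toy (all line shapes, widths, and depths invented for illustration), multiplying a narrow stellar line by a nearby telluric line at high resolution and then convolving with the line-spread function gives a different answer than convolving each factor first and then multiplying:

```python
import numpy as np

# High-resolution stellar spectrum s and telluric transmission t (toy shapes).
x = np.linspace(-10.0, 10.0, 2001)
s = 1.0 - 0.8 * np.exp(-0.5 * (x / 0.05) ** 2)           # narrow stellar line
t = 1.0 - 0.5 * np.exp(-0.5 * ((x - 0.1) / 0.04) ** 2)   # nearby telluric line

# Instrumental line-spread function, much broader than either line.
lsf = np.exp(-0.5 * (x / 0.5) ** 2)
lsf /= lsf.sum()

def observe(spec):
    """Convolve a high-resolution spectrum down to instrument resolution."""
    return np.convolve(spec, lsf, mode="same")

right = observe(s * t)            # multiply at high resolution, then convolve
wrong = observe(s) * observe(t)   # convolve each factor, then multiply

# The two orderings disagree; the culprit is the cross-term between the
# two narrow lines, which gets smoothed differently in the two orderings.
core = slice(300, -300)  # ignore convolution edge effects
print(np.max(np.abs(right[core] - wrong[core])))
```

The disagreement scales with how much the two sets of narrow lines overlap, which is why this matters for telluric (and gas-cell) modeling at fixed instrumental resolution.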
Today Soledad Villar (JHU) and I tried to write down a zeroth-order problem statement for looking at the applicability of graph neural networks for solving cosmological large-scale-structure problems. The idea is that the universe has graph symmetries: The physics is not sensitive to the order in which we label our particles. This, the machine learners call “graph equivariance”. The universe also has many other symmetries like rotation, translation, boost and so on. These we are calling (for now) “gauge equivariances”. The mathematical language is different from the physical language, as usual!
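A tiny toy of the labeling symmetry: any function built from unordered pairs of particles is unchanged by relabeling, whereas a function of the flattened, ordered coordinate vector (like a naive network input layer) is not. Everything here is invented for illustration:

```python
import numpy as np

def pairwise_energy(positions):
    """Sum of 1/r interactions over unordered pairs: blind to particle labels."""
    n = len(positions)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            total += 1.0 / np.linalg.norm(positions[i] - positions[j])
    return total

rng = np.random.default_rng(0)
points = rng.normal(size=(6, 3))       # six "particles" in 3-space
perm = np.array([5, 4, 3, 2, 1, 0])    # relabel them (here: reverse the order)

print(np.isclose(pairwise_energy(points), pairwise_energy(points[perm])))  # True

# By contrast, a fixed linear function of the flattened, ordered coordinate
# vector is not label-blind:
w = np.arange(1.0, 19.0)
print(np.isclose(w @ points.ravel(), w @ points[perm].ravel()))  # almost surely False
```

Graph-equivariant architectures bake in the first kind of structure by construction; the additional rotation, translation, and boost symmetries are constraints on top of that.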
Binning is sinning. This phrase appears in our recent paper on correlation-function estimation. Our solution to the problem of binning is very deep (imho): Not only do we obviate binning in the radial-separation direction, we also obviate binning in any other quantity on which you think the clustering might depend (like angle wrt the line of sight, galaxy luminosity, and so on).
Abby Williams (NYU), Kate Storey-Fisher (NYU), and I are using the new unbinned estimator to look for variations in galaxy clustering with position within the Hubble volume. Traditionally this might be done by splitting the space into boxels, and measuring the clustering in boxels separately; are there variations? But binning is sinning: Now Storey-Fisher has made an estimator that can estimate the parameters of a clustering model with an explicit gradient or variation with position. And Williams has made simulated cosmological volumes that contain clustering gradients for testing purposes. We're close to making a (toy) measurement!
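In symbols (my cartoon of the setup, not necessarily the exact parameterization in Storey-Fisher's code): the correlation-function amplitudes are allowed an explicit linear dependence on position x within the survey volume,

```latex
% Cartoon of a clustering model with an explicit spatial gradient:
% basis functions f_k(r) of pair separation, with amplitudes that
% vary linearly across the volume.
\xi(r; \mathbf{x}) = \sum_k \left[ a_k^{(0)} + \mathbf{a}_k^{(1)} \cdot \mathbf{x} \right] f_k(r)
```

and the unbinned estimator fits the a coefficients directly, with no boxels in sight.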
I took a call today with Adrian Price-Whelan in which we discussed the very constructive, useful referee report we received for our orbital torus imaging paper. I don't know if it is just me, but I think refereeing in astrophysics has generally become more constructive over the years. More about improving the literature and less about gatekeeping it. The referee's most challenging comments are about testing our assumptions, and understanding how the anomalies we find in that paper trace back to violations of our assumptions. That's science right there.
It's crunch time this weekend on the proposal that Dustin Lang (Perimeter) and I are writing for the NASA Open-Source Tools, Frameworks, and Libraries call. I spent a lot of quality time this weekend cranking out words. After doing some literature review, we find that Astrometry.net is used in a huge number of projects, from NASA missions to cosmic-ray detectors to (of course) amateur astrophotography workflows. That's exciting, and relevant to our proposal. One of the great things about the NASA call is that it requires us to think about project management, community building, and collaboration policies. That is good; it will help our project immensely.
Lily Zhao (Yale), Megan Bedell (Flatiron), and I looked at Zhao's results trying to learn radial-velocity displacements from spectral shape changes for EPRV data. She finds that it works, which got me extremely excited! However, there is some suggestion in her results that the method might be making use of unmodeled (unmasked) telluric lines of very low amplitude. This is not permitted, because we want to just use changes in the star's intrinsic shape. We came to the realization in the call that this gets into questions of causal inference: When the signals are small or noisy, how do we know that we are learning from signals caused by the star itself, rather than the atmosphere or instrument? We decided to move to simulated spectra for a bit to look at these questions in a playground where we know the right answers.
I froze in my mother-in-law's car (NYC alternate-side parking FTW) while I spoke with Teresa Huang (JHU) and Soledad Villar (JHU) about our old project to find adversarial attacks against machine-learning methods used in astronomy. One of the big problems we face is that our methods require good derivatives of output with respect to input (or vice versa) for the methods we are studying. However, it is often hard to get these derivatives precisely. Even when a method has analytic Jacobian or derivative operators (like those tensorflow and jax deliver), they aren't always useful, because sometimes the methods are doing stochastic things like dropout and ensembles when they make predictions. Our conclusion was that maybe we need to reimplement all the methods ourselves, maybe in straw-person forms. That's bad. But also good?
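A toy illustration of the problem (the "network" here is just a random linear map, invented for illustration): central finite differences recover the derivatives of a deterministic predictor essentially exactly, but are garbage for a predictor that randomizes (dropout-style) at prediction time:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(1, 8))   # stand-in "network": a random linear map

def predict_deterministic(x):
    return float(W @ x)

def predict_with_dropout(x, p=0.5):
    """Randomly zero half the weights at prediction time (dropout-style)."""
    mask = rng.random(W.shape) > p
    return float((W * mask / (1.0 - p)) @ x)

def finite_diff_grad(f, x, eps=1e-4):
    """Central finite differences, one input dimension at a time."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2.0 * eps)
    return g

x = rng.normal(size=8)
g_det = finite_diff_grad(predict_deterministic, x)
g_sto = finite_diff_grad(predict_with_dropout, x)

print(np.max(np.abs(g_det - W[0])))  # tiny: recovers the true gradient
print(np.max(np.abs(g_sto - W[0])))  # huge: O(1) prediction noise divided by eps
```

The stochastic case is bad in exactly the way that hurts us: the noise in the function values gets divided by the small step size, so the derivative estimate is dominated by the randomization, not the model.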
Today I took a serious shot at getting words down in my upcoming NASA proposal for open-source tools, frameworks, and libraries. This is a new call to support development and maintenance of open-source projects that are aligned with NASA science missions (yay open science and NASA!). Dustin Lang (Perimeter) and I are proposing to support Astrometry.net, which is used in multiple NASA missions, including SOFIA and SPHEREx. It is hard to put together a full proposal; writing a proposal is comparable in intellectual scope to writing a scientific paper! And it must be done on deadline, or not at all.
Independently, Kathryn Johnston (Columbia) and David Spergel (Flatiron) have pointed out to me that if you have a Hamiltonian dynamical system that is slightly out of steady-state, you can do a kind of expansion, in which the steady-state equation is just the zeroth order term in an expansion. The first-order term looks like the zeroth-order Hamiltonian acting on the first-order perturbation to the distribution function, plus the first-order perturbation to the Hamiltonian acting on the zeroth-order distribution function (equals a time derivative of the distribution function). That's cool!
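In symbols (my paraphrase of the expansion, with {·,·} the Poisson bracket, up to sign conventions): writing f = f_0 + f_1 and H = H_0 + H_1, the collisionless Boltzmann equation df/dt = {H, f} gives {H_0, f_0} = 0 at zeroth order (that's the steady state), and at first order

```latex
% First-order perturbation to a steady-state Hamiltonian system:
% the zeroth-order Hamiltonian acts on the first-order distribution
% function, plus the first-order Hamiltonian acts on the zeroth-order
% distribution function.
\frac{\partial f_1}{\partial t} = \{H_0, f_1\} + \{H_1, f_0\}
```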
Now couple that idea with the fact that a steady-state Hamiltonian system is a set of phase-mixed orbits nested in phase space (literally a 3-torus foliation of 6-space). Isn't this first-order equation the equation of a winding-up spiral mode? I think it is! If so, it might unite a bunch of phenomenology, from cold stellar streams to spiral structure in the disk to The Snail. I discussed all this with Adrian Price-Whelan (Flatiron).
I had a wide-ranging conversation today with Rob Simcoe (MIT) about connections between my group in New York and his group in Cambridge MA. He does complex hardware. We do principled software. These two things depend on each other, or should! And yet few instruments are designed with software fully in mind, in the sense of making good, non-trivial trades between hardware costs and software costs. And also few software systems are built with deep knowledge of the hardware that produces the input data. So there are synergies possible here.
Today was the first meeting of the Gaia Unlimited project (PI: Anthony Brown), in which we attempt to make a selection function (and the tools for making many different kinds of selection functions) for investigators making use of the ESA Gaia data to perform population-level inferences. Among the many things we discussed were the definition of the selection function (which is not trivial, given the historical usage of the term: it appears in 700 refereed publications in 2020, according to NASA ADS), and what's known about the Gaia selection function already. The latter includes amazing work by Boubert and Everall, in which they have tried to reverse-engineer everything to determine the selection function to very faint levels, given the Gaia scan patterns, telemetry limits, and dropped fields. So far, my role in this project is on the conceptual side, around definitions, terminology, and use cases. Along those lines there was great discussion about what the selection function is, and what it is not. Our position is that it is the probability, given hypothetical properties q, that a counter-factual source with those properties would enter the catalog. Even that definition is not quite complete, because there are details relating to the observability of, and noise in, the properties q. More about this throughout this year!
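To illustrate the working definition with the simplest possible toy (the magnitude limit and noise level below are invented, not Gaia's): for a pure cut on a noisily observed magnitude, the selection function is the probability that a counter-factual source of true magnitude m would scatter into the catalog. Noise in the observed property makes the selection boundary soft:

```python
import math

def selection_probability(m_true, m_lim=20.7, sigma_m=0.05):
    """Probability that a counter-factual source of true magnitude m_true
    enters a catalog cut at observed magnitude m_lim, when the observed
    magnitude scatters around the truth with Gaussian noise sigma_m."""
    z = (m_lim - m_true) / sigma_m
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(selection_probability(20.0))  # ~1: much brighter than the cut
print(selection_probability(20.7))  # 0.5: right at the cut
print(selection_probability(21.4))  # ~0: much fainter than the cut
```

Even this toy shows why the definition has to be stated over hypothetical (true) properties q rather than observed ones: the same true source enters or misses the catalog depending on the noise draw.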
I spoke today with Matt Daunt (NYU), who is re-writing the wobble concept in Jax. That's a great project! We discussed how to structure the code so it is easy to use now and easy to extend. We have various extensions in mind, both conceptually (like using many stars simultaneously to improve our telluric models) and technically (like adapting to gas-cell spectrographs).
Stars and Exoplanets meeting at Flatiron (well, on zoom, really) was all about finding stellar streams. Matt Buckley (Rutgers) talked about repurposing, for the astrophysical domain, machine-learning methods used to find anomalies in high-energy-physics experimental data. Sarah Pearson (Flatiron) talked about building stream-finders that evolve from Hough transforms. In both cases we (the audience) argued that the projects should make catalogs of potential streams with low (non-conservative) thresholds: After all, it is better to find low-mass streams plus some junk than it is to miss them; every stream is potentially uniquely valuable.
I spent time today working through comments from Kate Storey-Fisher (NYU) on the document that Soledad Villar (JHU) and I have written about fitting flexible models. I made those changes, while Soledad put in some proofs of some of the key math points. We are so close to being done! But I don't mind being slowed down by amazingly constructive and useful comments from my students!
In a long conversation, Kate Storey-Fisher (NYU) and I worked through her new and nearly complete derivation of our continuous-function estimator for the correlation function (for large-scale structure). We constructed the estimator heuristically, and demonstrated its correctness somewhat indirectly, so we didn't have a good mathematical derivation per se in the paper. Now we do!
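For my own notes, here is my toy one-dimensional sketch of the estimator's structure (this is my cartoon, not the paper's full estimator, which handles the survey geometry, weights, and smooth bases; all function names here are mine): the bin indicator functions of Landy-Szalay are replaced by general basis functions f_k of pair separation, and the per-bin RR normalization becomes a matrix solve. With the top-hat basis below, the amplitudes reduce exactly to binned Landy-Szalay:

```python
import numpy as np

def pair_separations(a, b=None):
    """All pairwise 1-d separations within a point set, or between two sets."""
    if b is None:
        d = np.abs(a[:, None] - a[None, :])
        return d[np.triu_indices(len(a), k=1)]
    return np.abs(a[:, None] - b[None, :]).ravel()

def tophat_basis(r, edges):
    """Indicator-function basis; swapping in splines etc. gives a smooth xi(r)."""
    return np.array([(r >= lo) & (r < hi)
                     for lo, hi in zip(edges[:-1], edges[1:])], dtype=float)

def continuous_xi_amplitudes(data, rand, edges, basis=tophat_basis):
    """Estimate amplitudes a_k of xi(r) = sum_k a_k f_k(r), Landy-Szalay style."""
    nd, nr = len(data), len(rand)
    v_dd = basis(pair_separations(data), edges).sum(axis=1) / (nd * (nd - 1) / 2)
    v_dr = basis(pair_separations(data, rand), edges).sum(axis=1) / (nd * nr)
    F_rr = basis(pair_separations(rand), edges)
    n_rr = nr * (nr - 1) / 2
    v_rr = F_rr.sum(axis=1) / n_rr
    Q = (F_rr @ F_rr.T) / n_rr  # generalizes the per-bin RR normalization
    return np.linalg.solve(Q, v_dd - 2.0 * v_dr + v_rr)

rng = np.random.default_rng(3)
data = rng.uniform(0.0, 10.0, 60)
rand = rng.uniform(0.0, 10.0, 240)
edges = np.linspace(0.5, 5.0, 6)
amps = continuous_xi_amplitudes(data, rand, edges)
print(amps)  # for this top-hat basis: the binned Landy-Szalay values
```

The point of the continuous estimator is that nothing forces f_k to be indicators: smooth functions of separation, and of anything else the clustering might depend on, drop straight into the same linear algebra.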
Hans-Walter Rix (MPIA) and I had a solid conversation today about the scope of our first paper on the selection function. We want to be pedagogical in scope and content. So the argument between us is: How sophisticated to get in our selection-function model? Rix is arguing for a less sophisticated case, keeping the story and main point simple, and I am arguing for something more sophisticated, one that connects more closely to the real decisions that people are making every day. And all this relates to exactly what toy problems we show. We came to something of a compromise position, in which we give an example where the apparent magnitude cut is the main selection, but then show what happens when you expand the sample such that other effects beyond the pure apparent magnitude cut start to affect the sample significantly. One of our points will be that as particular selection effects get fractionally smaller in impact on your sample, you don't have to model them as precisely to meet some global accuracy goals for your model of the whole population.