NASA funding white paper

[I have been off the grid doing important things. Apologies to my loyal reader.]

I spoke to Megan Bedell (Flatiron) about representing Flatiron next week at the Terra Hunting Experiment collaboration meeting in Cambridge, UK. I think we see eye-to-eye on all things. In general, we at Flatiron are for transparency in operations, openness, data and code releases, and building legacy value.

I spent most of my research time responding (at the very last minute) to this NASA Request For Information. In response, I wrote a white paper about supporting the development of blue-sky methods and software that might enable qualitatively new missions and capabilities. My (very hastily written; my apologies!) white paper is here. Comments greatly appreciated (even though it is too late for the RFI itself).


spiral density perturbation

I had a call today with Eilers (MIT) and Rix (MPIA) about making self-consistent perturbations to a simple galaxy disk model and interpreting kinematic signatures therein. We have been doing this for months but there are still some conceptual issues that are difficult. For example: Can we really confine our perturbation to the disk plane, or do we have to give it non-trivial three-dimensional structure? We don't all agree, but we are getting closer. It turns out: Dynamics is hard!


calibration of spectrographs

I was barely present at work today! Things going on. But the first cohort of pre-doctoral fellows at Flatiron completed their terms this week, and they gave an amazing set of talks, which spanned a huge range of science. What a pleasure. My (biased) favorite is Lily Zhao (Yale), who has potentially revolutionized how spectrographs will be calibrated. As my loyal reader knows!


stellar survey

Bedell (Flatiron) and I were on the Terra Hunting Experiment science call where we discussed the idea that the whole extreme precision radial-velocity (EPRV) community might collaborate on doing some big target-selection surveys of relevant bright stars. Different surveys will want to make different choices, but we all want the same kinds of input data to make those choices. So maybe we should just band together and observe the heck out of the possible targets (bright main-sequence stars)? If you are in the community and you want in, send us email!


talking about the future of NASA funding

My only research today was a conversation with Dustin Lang (Perimeter) about NASA funding programs. I am thinking about responding to this call for information.



Nothing for two days. For reasons that are out of scope here.



It was a low-research day (job season) but I worked a bit with Lily Zhao (Yale) on interpolation methods and comparing interpolations. This for our hierarchical, non-parametric wavelength calibration method.


non-parametric and hierarchical

On the flight home from #AAS235, I did some writing in a paper by Lily Zhao (Yale) about spectrograph (wavelength) calibration. I'm very excited about this project; we removed all dependence on polynomials and other kinds of strict functional forms. We went non-parametric. But of course this greatly increases the degrees of freedom of the fitting or interpolation of the calibration data. So when we do this, we also have to go hierarchical: we have to restrict the calibration freedom using the data. That is, we don't impose any strict functional form on the calibration of the spectrograph, but we require that the calibration solution we find lives in the space of solutions that we have seen before. In short, if you increase the freedom by going non-parametric, you need to restrict the freedom by going hierarchical. (The results look incredible.)
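The flavor of the idea can be sketched in a few lines: build an archive of past wavelength solutions, take its principal components, and then fit tonight's sparse calibration-line measurements only within that low-dimensional space. This is a toy with entirely made-up numbers and names, not the actual method in the paper:

```python
import numpy as np

rng = np.random.default_rng(17)

# -- Simulate an archive of past wavelength solutions on a common pixel grid.
#    (Hypothetical stand-in for real calibration exposures.)
n_pix = 500
pixels = np.linspace(0.0, 1.0, n_pix)
base = 5000.0 + 1000.0 * pixels + 30.0 * np.sin(4.0 * np.pi * pixels)
archive = base + 0.5 * rng.standard_normal((50, 1)) * np.cos(2.0 * np.pi * pixels)

# -- "Go hierarchical": PCA the archive so tonight's solution must live in
#    the low-dimensional space of solutions we have seen before.
mean = archive.mean(axis=0)
u, s, vt = np.linalg.svd(archive - mean, full_matrices=False)
k = 3
basis = vt[:k]              # top-k principal components, each of length n_pix

# -- Tonight: only a handful of calibration lines, at known pixels.
line_pix = rng.choice(n_pix, size=12, replace=False)
truth = base + 0.4 * np.cos(2.0 * np.pi * pixels)
obs = truth[line_pix] + 0.01 * rng.standard_normal(12)

# -- Least-squares fit of the PCA amplitudes to the sparse line measurements,
#    then reconstruct the full non-parametric wavelength solution.
A = basis[:, line_pix].T
amps, *_ = np.linalg.lstsq(A, obs - mean[line_pix], rcond=None)
solution = mean + amps @ basis

rms = np.sqrt(np.mean((solution - truth) ** 2))
print(f"rms error of reconstructed solution: {rms:.4f} (wavelength units)")
```

The point of the sketch: with only 12 measured lines you could never constrain a free function of 500 pixels, but you can easily constrain 3 amplitudes in a basis learned from prior calibrations.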


#AAS235, day 4 and #hackaas

Today was Hack Together Day #hackaas at #AAS235. We computed that this is the eighth winter AAS meeting to have a hack day, making it (AAS Hack Together Day) one of my scientific accomplishments of the decade. At the hack day, the main thing I did was hack on hack day, working with Jim Davenport (UW) to brainstorm things we can do to keep the event fresh, and keep us experimenting with it. I also had a great conversation with Brigitta Sipocz, Geert Barentsen, and others about ways we can use our hacking and design thinking to support a reduction in CO2 emissions by astronomers and academics in general. Related to my conversations of yesterday.

But many great things happened in the Hack Together Day. Too many to list here. Look at the wrap-up slides to get a sense of the range and depth of the projects. So many people learned a lot and did a lot. I'm proud, which is a sin, apparently.


remote meetings

A highlight of today was a long meeting with Chris Lintott (Oxford) covering many subjects. But he told me about dot-dot-astronomy, which is a fully-remote reboot they are working on for the niche but extremely influential dot-astronomy meetings. The idea is to go fully remote—all participants remote—but then change the meeting expectations and structure to respect that. The idea is: Maybe not try to do remote meetings so they are just as good as face-to-face meetings, but to try to do remote meetings so they are something very different from face-to-face meetings. That seems like a great idea. Let's re-frame our goals. We have to do something about what we are doing to this planet.


#AAS235, day 2

My personal life relented slightly and I got to Hawaii for a bit of the 235th Meeting of the American Astronomical Society. It is great to see the whole community (or a very large part of it) in one place at one time; I'm still a believer in these meetings, after all these years (I've been attending pretty regularly since 1994). Oh no, has this become “old-fogey research blog”?

Because I arrived today I only saw a few talks, one of which was by my student Storey-Fisher (NYU), who explained how we can estimate the two-point correlation function without binning the data in pair separation. She did a good job of summarizing the benefits, which are legion: We lower the bias and the variance relative to the traditional methods, and we can work in function spaces that are appropriate to our science questions, to name just two. I can't wait to submit this paper.
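To get the flavor of a bin-free estimator, here is a cartoon of my own (not Storey-Fisher's actual estimator): instead of histogramming pair separations, project them onto a smooth orthonormal basis, so the estimated correlation function can be evaluated at any separation, with no bin edges anywhere. All the numbers below are invented:

```python
import numpy as np

rng = np.random.default_rng(42)

# -- Toy 1-D "catalogs" in a periodic box of size 100: clustered data
#    (points scattered around cluster centers) plus uniform randoms.
n = 400
randoms = rng.uniform(0.0, 100.0, n)
centers = rng.uniform(0.0, 100.0, 40)
data = (centers[rng.integers(0, 40, n)] + rng.normal(0.0, 2.0, n)) % 100.0

def pair_seps(x):
    """All pairwise separations (periodic, box size 100)."""
    d = np.abs(x[:, None] - x[None, :])
    d = np.minimum(d, 100.0 - d)
    return d[np.triu_indices(len(x), k=1)]

# -- Instead of histogram bins, project pair separations onto a smooth
#    orthonormal cosine basis on [0, s_max]: a continuous-function estimator.
s_max = 20.0
def basis(s, k_max=8):
    ks = np.arange(k_max)
    out = np.cos(np.pi * ks[:, None] * s[None, :] / s_max) * np.sqrt(2.0 / s_max)
    out[0] /= np.sqrt(2.0)     # constant mode gets unit norm too
    return out                 # shape (k_max, len(s))

def projected_density(seps):
    sel = seps < s_max
    return basis(seps[sel]).sum(axis=1) / sel.sum()   # basis coefficients

c_dd = projected_density(pair_seps(data))
c_rr = projected_density(pair_seps(randoms))

# -- Evaluate the estimated correlation function anywhere, no bin edges:
s_grid = np.linspace(0.5, s_max - 0.5, 100)
xi = (c_dd @ basis(s_grid)) / (c_rr @ basis(s_grid)) - 1.0
print("xi at small separations:", xi[:3])
```

Because the clusters have a scale of a few units, the estimate comes out positive at small separations, and the choice of basis (cosines here) is exactly the kind of function-space choice the talk was about.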

Her talk was followed by an excellent talk by Shajib (UCLA) about gravitational lensing and the Hubble-Constant controversy. He showed that the lensing results are falling in line with the late-time, supernova-based Hubble-Constant measurements, not the CMB and BAO measurements. And his biggest systematic in his time-delay analyses is (as expected) the foreground “mass sheet” degeneracy. He is getting close to achieving one of the dreams of this field (a dream I have shared with Phil Marshall, for example), which is to automate the fitting of non-trivial strong gravitational lens systems, including the lensing galaxy and multiple source galaxies. Beautiful stuff.

And at this meeting there was so much more, almost infinitely more!


adversarial attacks on linear models

I got some work done today on my project with Soledad Villar (NYU) to understand the differences between discriminative and generative models. I wrote code to make L2-normalized and single-pixel (or sparse) attacks on the discriminative model. Everything is linear, so these attacks aren't dramatic, but they definitely work. I can make obviously irrelevant moves that change the slope (context is: fitting a straight line, using machine learning!).
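Because everything is linear, both attacks have closed forms: the slope estimate is a fixed linear functional w of the data, so the L2-bounded perturbation that maximally shifts it points along w, and the sparse attack spends its whole budget on the data point with the largest |w_i|. A minimal sketch on synthetic data (not our actual experiment code):

```python
import numpy as np

rng = np.random.default_rng(0)

# -- Fit a straight line y = m*x + b by least squares. The "model" under
#    attack is the slope estimator, which is linear in the data y.
n = 30
x = np.linspace(-1.0, 1.0, n)
X = np.stack([x, np.ones(n)], axis=1)
y = 2.0 * x + 0.3 + 0.1 * rng.standard_normal(n)

# slope = w @ y, where w is the first row of the pseudo-inverse of X
w = np.linalg.pinv(X)[0]
slope = w @ y

# -- L2-normalized attack: among all perturbations of fixed L2 norm eps,
#    the one along the gradient w changes the slope the most.
eps = 0.2
delta_l2 = eps * w / np.linalg.norm(w)
slope_l2 = w @ (y + delta_l2)

# -- Single-pixel (sparse) attack: put the whole budget on the one data
#    point with the largest |w_i|.
i = np.argmax(np.abs(w))
delta_sparse = np.zeros(n)
delta_sparse[i] = eps * np.sign(w[i])
slope_sparse = w @ (y + delta_sparse)

print(f"clean slope:     {slope:.3f}")
print(f"L2 attack:       {slope_l2:.3f}  (shift {slope_l2 - slope:+.3f})")
print(f"1-pixel attack:  {slope_sparse:.3f}  (shift {slope_sparse - slope:+.3f})")
```

The shifts are eps * ||w|| and eps * max|w_i| respectively, so the L2 attack is always at least as effective as the single-pixel one at the same budget; neither is dramatic, but both provably work, which is the point.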


what is a measurement?

I had a great conversation with Hans-Walter Rix (MPIA) to start off the new year. As we often do, we veered into epistemology. He asked: If you can use a data-driven model to infer the Eu/Fe abundance ratio of a star, but your precision is better than the Cramér–Rao bound (computed from the Fisher information in any consideration of actual Eu lines, using, say, a physical model of stellar photospheres that we believe), is this then a measurement of Eu/Fe? What if the data-driven model is in fact not directly measuring Eu but instead measuring elements that are highly covariant with Eu, so that the Eu information is nonetheless very good? This is not a hypothetical question; we think this is happening in some cases with our data-driven spectroscopic models.

What if our measurements—made with things covariant with Eu, but not Eu directly—do a good job on predictive accuracy? The machine-learning community (or my stereotype of this community) would say that predictive accuracy is all there is: If you predict Eu/Fe well, then you are measuring Eu/Fe. But most (or many, or my stereotypical) astronomers would say that you aren't measuring Eu/Fe unless Eu lines are involved in the measurement, or some physically motivated derivative of the stellar spectrum with respect to Eu.
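A tiny simulation makes the scenario concrete: a label that never touches the spectrum can still be predicted accurately if a covariant element does. Everything here (the elements, line positions, and correlations) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# -- Toy spectra: the "Eu" abundance never affects the flux, but a covariant
#    element ("Ba", say, correlation 0.9) does imprint absorption lines.
n_star, n_pix = 300, 50
eu = rng.standard_normal(n_star)
ba = 0.9 * eu + np.sqrt(1.0 - 0.9 ** 2) * rng.standard_normal(n_star)

ba_lines = np.zeros(n_pix)
ba_lines[[10, 24, 37]] = 1.0          # pixels where only Ba absorbs
flux = (1.0 - 0.1 * ba[:, None] * ba_lines[None, :]
        + 0.01 * rng.standard_normal((n_star, n_pix)))

# -- Data-driven model: plain linear regression from flux to the Eu label.
train, test = slice(0, 200), slice(200, None)
A = np.hstack([flux[train], np.ones((200, 1))])
coef, *_ = np.linalg.lstsq(A, eu[train], rcond=None)
pred = np.hstack([flux[test], np.ones((100, 1))]) @ coef

r = np.corrcoef(pred, eu[test])[0, 1]
print(f"test-set correlation of predicted vs true 'Eu': {r:.2f}")
```

The regression achieves good held-out predictive accuracy without any Eu information in the data at all; whether that counts as a "measurement" of Eu is exactly the question.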

Or maybe the problem is causal: It isn't a measurement of Eu if the data aren't causally related to Eu?

I'm not sure I agree with the (cartoon) astronomers here, nor the (cartoon) machine-learners. The situation is complicated. After all, you never directly measure anything of importance in astrophysics; every measurement depends on chains of covariances and common causes. For example, when we measure the age of the Universe, we aren't really measuring the age per se, we are measuring a combination of cosmological parameters that assemble into that age. For instance, you couldn't measure the age of the Universe independently of the Hubble Constant and the mass densities. But I also agree that if Eu isn't in the spectrum, it's a bit weird to say that you can measure it.