2021-08-31

orbital torus imaging: next steps

Price-Whelan (Flatiron), Rix (MPIA), and I met this morning to discuss the next projects we might do with our orbital torus imaging method for measuring the force law in the Galaxy. The method has many information-theoretic advantages over classical methods like Jeans modeling, but it has one very big disadvantage: We need to be able to compute actions and angles in every gravitational potential we consider, which means the potentials must be integrable, which in turn restricts us to limited potential families. We'd like to be doing something way more non-parametric. Rix is proposing (and we are accepting) that we do OTI in a restricted family, but dice the data by position in the Milky Way, and then see whether the patch-by-patch results are interpretable in terms of a more general mass distribution.

2021-08-30

geometry of density derivatives

Gaby Contardo (Flatiron) and I discussed our project on voids and gaps in data. We have to write! That means putting into words things we have so far only done in code. One of our realizations is that there is a very useful statistic: the largest eigenvalue of the second-derivative (Hessian) tensor of the density estimate, projected onto the subspace orthogonal to the gradient (the first derivative). It's obvious to us that this is a good statistic, but it isn't obvious how to explain to others why. Contardo assigned me that explanation as homework this week.
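
For my own benefit (and as a start on the homework), here is a minimal numpy sketch of the statistic as I understand it. The toy density, the test point, and the finite-difference derivatives are all placeholders for illustration, not what Contardo actually runs on data.

```python
import numpy as np

def critical_statistic(hessian, gradient):
    """Largest eigenvalue of the density Hessian restricted to the
    subspace orthogonal to the density gradient."""
    n = len(gradient)
    g = gradient / np.linalg.norm(gradient)
    # build an orthonormal basis for the subspace orthogonal to g
    Q, _ = np.linalg.qr(np.column_stack([g, np.eye(n)]))
    B = Q[:, 1:n]                        # n x (n-1), columns orthogonal to g
    H_perp = B.T @ hessian @ B           # Hessian restricted to that subspace
    return np.linalg.eigvalsh(H_perp)[-1]

def toy_density(x, y):
    """Made-up 2-d density with a gap carved out along y = 0."""
    return np.exp(-0.5 * (x**2 + y**2)) * (1.0 - 0.9 * np.exp(-8.0 * y**2))

def grad_and_hess(f, point, eps=1e-3):
    """Numerical gradient and Hessian by central finite differences."""
    p = np.asarray(point, dtype=float)
    n = len(p)
    grad, hess = np.zeros(n), np.zeros((n, n))
    for i in range(n):
        ei = np.zeros(n); ei[i] = eps
        grad[i] = (f(*(p + ei)) - f(*(p - ei))) / (2.0 * eps)
        for j in range(n):
            ej = np.zeros(n); ej[j] = eps
            hess[i, j] = (f(*(p + ei + ej)) - f(*(p + ei - ej))
                          - f(*(p - ei + ej)) + f(*(p - ei - ej))) / (4.0 * eps**2)
    return grad, hess

g, H = grad_and_hess(toy_density, [0.3, 0.1])
print(critical_statistic(H, g))
```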

2021-08-20

what is gauge symmetry?

After we posted this paper on gauge-equivariant methods for machine learning, we got referee comments suggesting that maybe what we are doing isn't really a gauge symmetry. So we spent a lot of time working on gauge! What we are doing can be made properly gauge-equivariant, but additional structure has to be added to make that clear. We are clarifying in the revision, but the full treatment, with true gauge invariance and things like parallel transport, may belong in a next contribution.

2021-08-19

finishing the response to the referee and adjusting the paper

As is usual with Publications of the Astronomical Society of the Pacific (great journal!), Soledad Villar and I got a constructive and useful referee report on our fitting paper. We finished our responses and the corresponding adjustments to the paper today. The referee made an excellent point: Since there are fast Gaussian-process codes out there, why ever do interpolation or flexible fitting any other way? Good question! We answered it in the new revision (because sometimes a fast GP implementation doesn't exist for your problem, sometimes you don't want a stationary process, and sometimes you are working in a weird geometry or space), which we will post to arXiv soon.
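
To make concrete one of the assumptions baked into the default-to-a-GP advice, here is a minimal scikit-learn sketch; the data, kernel, and hyperparameters are all made up for illustration. The standard RBF kernel depends only on the separation between inputs, so it is stationary, and the fast GP solvers typically rely on special structure in kernels of this kind.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# toy 1-d data, made up for illustration
rng = np.random.default_rng(42)
x = np.sort(rng.uniform(0.0, 10.0, 50))[:, None]
y = np.sin(x).ravel() + 0.1 * rng.standard_normal(50)

# the RBF kernel depends only on the separation |x - x'|, so it is
# stationary: one global correlation length for the whole domain
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(x, y)

x_new = np.linspace(0.0, 10.0, 200)[:, None]
mean, std = gp.predict(x_new, return_std=True)
```

When the correlation structure varies across the domain, or the inputs live on a sphere or some other weird space, that kernel (and the fast machinery built around kernels like it) no longer applies directly; that is the heart of our answer to the referee.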

2021-08-09

what are the assumptions underlying EPRV?

I reopened an old paper started (many years ago now) by Megan Bedell (Flatiron) and me, about the precision possible in extreme precision radial-velocity spectroscopy. Most of the results in the literature on how precisely you can measure a radial velocity (the information-theoretic question) depend on a very large number of assumptions, which we try to enumerate. The thing we'd like to do with this paper (and frankly, it will take many more after it) is to weaken or break those assumptions and see what gives. I have an intuition that if we understand all of that information theory, it will help us with observation planning and data analysis.
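
As a reminder to myself of what the baseline calculation looks like, here is a toy sketch of the Cramér-Rao bound on a radial velocity under the usual strong assumptions (known template, known Doppler model, perfect calibration, independent Gaussian pixel noise). Every number in it (the line list, the pixel grid, the noise) is made up for illustration.

```python
import numpy as np

c = 299792458.0  # speed of light, m/s

def template(wavelength):
    """Toy stellar spectrum: flat continuum with a few Gaussian absorption lines."""
    centers = [5000.5, 5001.7, 5003.2]   # line centers in Angstroms, made up
    flux = np.ones_like(wavelength)
    for center in centers:
        flux -= 0.6 * np.exp(-0.5 * ((wavelength - center) / 0.05) ** 2)
    return flux

def model(wavelength, v):
    """The template Doppler-shifted by radial velocity v (in m/s)."""
    return template(wavelength / (1.0 + v / c))

# pixel grid and per-pixel Gaussian noise, both made up
wavelength = np.linspace(5000.0, 5004.0, 1500)   # Angstroms
sigma = 0.01 * np.ones_like(wavelength)          # flux uncertainty per pixel

# Fisher information for v under independent Gaussian pixel noise:
#   I(v) = sum_i (d model_i / d v)^2 / sigma_i^2,  sigma_v >= 1 / sqrt(I)
dv = 1.0  # m/s step for the numerical derivative
dmodel_dv = (model(wavelength, +dv) - model(wavelength, -dv)) / (2.0 * dv)
fisher = np.sum(dmodel_dv ** 2 / sigma ** 2)
print("Cramer-Rao bound on the radial velocity:", 1.0 / np.sqrt(fisher), "m/s")
```

The point of the paper is to weaken or break each of those assumptions and see how a bound like this one changes.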

2021-08-06

preparing presentations

Adrian Price-Whelan (Flatiron), Emily Cunningham (Flatiron), and I met with Micah Oeur (Merced) and Juan Guerra (Yale) to go through their ideas about how to present their results at the wrap-up of the summer school on dynamics next week. We discussed lots of good ideas about how to do short presentations, how to react to other presentations in the same session, how to read the audience, and so on. Someone should write a book about this!

2021-08-05

every Keck/DEIMOS star, ever!

I met briefly this morning with Marla Geha (Yale). She is completing an impressive project, in which she has re-reduced, from the raw data, every (or nearly every) Keck/DEIMOS spectrum of a star in the Milky Way or Local Group. These include ultra-faint dwarfs, classical dwarfs, globular clusters, halo, disk, and so on. It is an absolute goldmine of science. We spent time talking about the technical details, since she has done a lot of creative and statistically righteous things in this project (which is built on the exciting new open-source project PypeIt). But we also dreamed about a lot of science that we could be doing with these data. It will be of order 10^5 stars.

2021-08-03

predicting wavelength calibration from housekeeping data

I spent some of today doing one of my favorite things: fitting flexible models. The context was my attempt to predict the wavelength solution for the SDSS-IV BOSS spectrographs using only housekeeping data, like the state of the telescope, temperatures, and so on. It doesn't work accurately enough, or at least not with the housekeeping data I've tried so far. It looks like there might be hysteresis, or a mechanical clank, or something like that in the system. If that's right, it bodes poorly for reducing the number of arcs we need to take in SDSS-V, which is supposed to move fast and not break things.
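
For the record, this is the flavor of fit I mean; everything here (the feature count, the array shapes, the model class) is a placeholder and not the real BOSS pipeline, but a held-out prediction error like this is one way to judge whether "accurately enough" is met.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

# X: one row per arc exposure, columns are housekeeping values
# (temperatures, telescope state, and so on); y: wavelength-solution
# coefficients (or line centroids) from the arc reductions.
# Shapes and contents here are placeholders, not the real BOSS data.
n_exposures, n_housekeeping, n_coefficients = 500, 8, 4
rng = np.random.default_rng(17)
X = rng.standard_normal((n_exposures, n_housekeeping))
y = X @ rng.standard_normal((n_housekeeping, n_coefficients)) \
    + 0.05 * rng.standard_normal((n_exposures, n_coefficients))

# a flexible-but-regularized model: polynomial features plus ridge regression
model = make_pipeline(StandardScaler(),
                      PolynomialFeatures(degree=2, include_bias=False),
                      Ridge(alpha=1.0))

# the held-out prediction error is the number to compare against the
# wavelength accuracy the survey actually needs
scores = cross_val_score(model, X, y, cv=5,
                         scoring="neg_root_mean_squared_error")
print("cross-validated RMS prediction error:", -scores.mean())
```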

But all that said, I still have one card left to play: see whether we can learn enough from the sky lines in the science frames, plus the historical behavior of every arc ever taken, to lock everything down without taking an arc for every visit.

2021-08-02

scalings for different methods of inference

Excellent long check-in meeting with Micah Oeur (Merced) and Juan Guerra (Yale) about their summer-school projects with me and Adrian Price-Whelan (Flatiron). The projects all involve performing inferences on toy data sets, in which we make fake observations of a very simple dynamical system and try to infer the parameters of that system. We are using the virial theorem, Jeans modeling, Schwarzschild modeling, full forward modeling of the kinematics, and orbital torus imaging. We have results for many of these methods already (go team!) and more to come. Today we discussed the problem of measuring how the inferences (the uncertainties, say) scale as a function of the number of observations and the quality of the data. Do they all scale the same way? We also want to check sensitivity to selection effects, and to wrongnesses in the assumptions.
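
One simple way to measure the scalings we discussed: run each inference at a range of sample sizes and fit a power law to the resulting uncertainties in log-log space. Here is a sketch with a stand-in "inference"; the real projects would swap in the virial, Jeans, Schwarzschild, forward-modeling, or OTI uncertainty at that step.

```python
import numpy as np

rng = np.random.default_rng(8)

def toy_inference_uncertainty(n_stars):
    """Stand-in for any of the methods: return the uncertainty on an
    inferred parameter from a mock data set of n_stars tracers.
    Here it is just the standard error of a mean velocity."""
    velocities = rng.normal(0.0, 30.0, size=n_stars)   # km/s, made up
    return velocities.std(ddof=1) / np.sqrt(n_stars)

# run the "inference" at a range of sample sizes, averaging several trials
n_grid = np.array([64, 128, 256, 512, 1024, 2048, 4096])
sigmas = np.array([np.mean([toy_inference_uncertainty(n) for _ in range(32)])
                   for n in n_grid])

# fit sigma = A * N^alpha in log-log space; alpha near -1/2 is the
# familiar root-N scaling, and deviations from it are what we care about
alpha, logA = np.polyfit(np.log(n_grid), np.log(sigmas), 1)
print("measured scaling exponent alpha =", alpha)
```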