Today was the last day and wrap-up for the 2019 SB Gaia Sprint. It was quite a week! A few highlights from the wrap-up (for me, very subjective, not fair or complete) were: Schwab Abrahams (Berkeley) showed that stars which are flagged in certain ways in the Gaia data are reliably variable stars, by looking at TESS light curves. Coronado (MPIA) showed that stars with small orbital-action differences tend to also have small element-abundance differences. Brown (Leiden) and others worked on making “Gold” samples in Gaia data that make it easy for people to look at or follow up spectroscopically. Mateu (UdelaR) improved her catalog of, and meta-data on, stellar streams in the halo. El Badry (Berkeley) convincingly showed us that there is an excess of very precisely equal-mass binary stars even at very large separations. Widrow (Queen's) showed first attempts at trying to perform a regression that can be used to infer the Galactic bar density from velocity fields. Hunt (Toronto) showed velocity and density maps of a simulated disk that look very much like the features that Eilers (MPIA) and I see in the data! And Laporte (UVic) showed a great movie of the data in the phase spiral (The Snail!) that shows its beautiful and informative dependence on azimuthal action (or really vertical frequency I think!). It was a great week with great people doing great things in a great location. I'm exhausted! The wrap-up slides are available here.
Each day at the Sprint, we have a check-in, in which daily results are discussed. Today Cecilia Mateu (UdelaR) showed improvements she has made to the database or list she maintains of known or reported stellar streams in the Milky Way halo. With the encouragement of Ana Bonaca (Harvard) and the help of Adrian Price-Whelan (Princeton), she made an astropy-compatible data file that delivers coordinate transformations into the stellar stream reference frames (great-circle coordinates). This will make it much, much easier for people to perform analyses on streams and compare new detections to known objects.
At lunch, a subset of the group that discussed the ESA Gaia selection function yesterday met again to discuss the possibility of putting together a large funding proposal to create what's needed. Many interesting things came up in this discussion. One is that many more projects are enabled by the selection function. So a small investment here greatly increases the impact of Gaia. Another is that we need to have a set of clearly defined example problems that illustrate the relevant issues. Another is that many of these possible example projects need not just an observational selection function but also a 3-d dust map in the Milky Way. Is that the same project or a different one? Another is that there aren't a lot of possible funding avenues that would be appropriate in both scale and international scope. It was a valuable discussion, but I don't know where we landed in the end.
The highlight of the day was a long discussion of the kinematics of the Milky Way bar with Larry Widrow (Queen's) and Ortwin Gerhard (MPE) and Christina Eilers (MPIA) and Sarah Pearson (Flatiron). We almost became convinced that we are seeing the bar at the center of the Galaxy kinematically. It appears as a quadrupole in the velocity field. But if we are seeing it, we are seeing it at the wrong angle! So there is work to do. And many of the simple ideas about what we see depend on some kind of steady-state assumption, when in practice the bar evolves on a time-scale comparable to its rotation period. More soon!
At the Gaia Sprint, there is no formal program. It is just work, work, and more work! But we do let the participants self-organize some break-out sessions that are more like sessions in a (highly interactive) workshop. Today, we ran a session on a possible ESA Gaia DR2 selection function. There is no selection function, and this seriously limits the science we can do with the mission and its data. I opened the session with some generalities about what a selection function should or could be and how we would use it, working from notes that Rix (MPIA) and I have been working on. I learned that we are describing it all wrong, and that we need much better and more worked-out example problems. It is very interesting to classify projects into those that do and those that don't need a selection function. Rix and I put it on our to-do list to re-work our paper outline on this.
In the sprinting part of the day, Eilers (MPIA) and I stepped back and realized that we should make all nine obvious kinematic plots of the Milky Way disk: Mean velocity (three plots), mean squared velocity (three plots) and mean velocity-velocity cross-correlation components (three plots). We started on that, and the bar looks like it just pops right out in the plot of the mean-square vertical velocity component! We are starting to realize that the things we want to plot that relate to the bar are very different from the things we want to plot that relate to the spiral arms.
After playing with visualization yesterday, Christina Eilers (MPIA) and I got the idea that perhaps the radial-velocity variations we see in the Milky Way disk might indicate density variations. In particular, does the radial-velocity field converge on high-density regions in the disk (spiral arms, say) and diverge on low-density regions (inter-arm gaps, say)? Sarah Pearson (Flatiron) came to our rescue with a nice visualization of the density and velocity fields, in which she could smoothly go from showing one to the other. And indeed, our intuitions were justified, at least qualitatively.
In the evening check-in, Paolo Tanga (Côte d'Azur) showed some beautiful results on the ESA Gaia coordinate systems relative to other catalogs. He calls these differences "zonal corrections" for historical reasons! I asked him how he knows which of the coordinate systems is best, and he said: In the best frame, the asteroids will travel on calculable trajectories. (I would say gravitational trajectories, but for asteroids, radiation pressure and other forces are relevant too!) So the best coordinate system will be Newtonian in the Solar System! Of course given frame dragging, and strictly speaking, Newtonian for the Solar System will not be Newtonian for the Galaxy! I asked about that and it led to some discussion with Larry Widrow (Queen's). I have much to say about all this, but I'm not yet ready to say it out loud.
Today was the first day of the 2019 Santa Barbara Gaia Sprint at KITP. My goal for the week is to write a paper with Christina Eilers (MPIA), Hans-Walter Rix (MPIA), Sarah Pearson (Flatiron) and others on non-axi-symmetries in the Milky Way disk, possibly including spiral arms and the bar. I'd like to say we made a lot of progress today! Maybe we did, but it was progress in the form of making very complex code changes to improve visualization and plotting and then deciding that they only made the figures less good. Grr.
The Sprint has no required program and very few plenary activities. However, each day of the Sprint ends with a check-in in which people show a few results. Kareem El-Badry (Berkeley) showed some incredible stuff he has been doing with Rix on wide-separation binaries (identified as co-moving stars). He shows an excess population of wide binaries with near-identical masses (equal at the few-percent level!). This is not surprising in that such populations are known at small separations. But it is surprising given that none of the explanations for the small-separation equal-mass binaries work at large separations. He did a lot of work today showing that these results are real and not something spurious in the Gaia data.
At that same check-in, Kathryn Johnston (Columbia) showed a beautiful visualization of how the local phase spiral in the Milky Way disk varies with azimuthal action. For her, the azimuthal action is a proxy for a vertical frequency; her picture is that the disk was impulsed at some time in the past, and that impulse has been winding up at different frequencies on different orbits. Beautiful.
In Astronomical Data Group Meeting, Megan Bedell (Flatiron) talked about possible uses of phylogenetic methods for looking at the chemical evolution of stars in the Milky Way. That's an idea that has been tried a few times, but she has a new twist: There are methods that take explicit account of time, and there are now many stars for which we have precise ages. I'm not sure, in the end, that methods from biology will translate directly to astrophysics, but I bet the sandbox is worth digging in a little bit. This connects to my thoughts and hopes of building a data-driven model of nucleosynthesis.
Before that, in conversations (also) with Bedell, I down-selected my ideas for the NASA Exoplanets Research Program call. The stage-1 proposals are due next week, so this is about as late as I can leave it. My plan is to propose something about stellar spectral variability and the new NASA investments in extreme precision radial-velocity hardware. Watch my GitHub repos for details.
Today Megan Bedell (Flatiron) and I had a telecon with the Terra Hunting Experiment team to discuss target selection. The idea is to use existing good data to choose a small set of (40-ish) stars to study for ten years. That's ambitious, which is (of course) why I love it! But how to select these stars? Our big argument today was about magnetic activity, which has some interesting properties. One is that it generally declines with age, so maybe we could just choose the stars to be not-young? Another is that there are activity cycles, so determination of low activity now might not guarantee low activity over the next decade.
One thing this caused me to ask (inside my head, that is) was: If you know that activity varies over time with some stochastic time scales, and if you need to be observing only low-activity stars, what does this imply for an adaptive observing program? That's a very nice question in experimental design. I smell the multi-armed bandit coming around the corner.
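To make the bandit framing concrete, here is a toy simulation, with all numbers invented: each star has a hidden mean activity level, each night we pick one star to observe, and an epsilon-greedy policy concentrates observations on the quietest stars. (Real activity also drifts with cycles, which is the interesting part this toy omits.)

```python
# Toy epsilon-greedy bandit for choosing which (low-activity) star to
# observe each night.  All quantities here are invented for illustration.
import numpy as np

rng = np.random.default_rng(17)
n_stars, n_nights = 8, 500
true_level = rng.uniform(0.0, 1.0, n_stars)  # hidden mean activity per star

counts = np.zeros(n_stars)
est = np.zeros(n_stars)   # running-mean estimate of each star's activity
eps = 0.1                 # exploration fraction
total = 0.0

for night in range(n_nights):
    if rng.random() < eps:
        k = int(rng.integers(n_stars))       # explore: random star
    else:
        k = int(np.argmin(est))              # exploit: quietest star so far
    obs = true_level[k] + 0.1 * rng.normal() # noisy activity measurement
    counts[k] += 1
    est[k] += (obs - est[k]) / counts[k]     # update running mean
    total += obs

mean_activity = total / n_nights
# The policy's mean observed activity should beat a random policy,
# which would achieve roughly np.mean(true_level).
```

The drifting-activity version would swap the static `true_level` for a stochastic process, which is where the real experimental-design question lives.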
The highlight of my research day was a long conversation with Adam J. Wheeler (Columbia) about error propagation in The Cannon. He and I discussed various ways to propagate uncertainties, which come jointly from the noisy spectra and the noisy labels that are used in the training step. There are more and more brutal approximations. We had this discussion in the context of a graphical model, which Adam (completely independently of me) had drawn just like mine. I ended up proposing that he take a jackknife approach. However, it might be possible to go fully Bayesian, something I didn't think was possible a year or two ago.
We also discussed my crazy idea to build a fully non-parametric but always locally linear version of The Cannon. This would have great properties, especially as regards noise propagation and inference.
Okay, another amazing thing about the day: In Stars and Exoplanets Meeting at Flatiron, John Brewer (Yale) showed us some brand-new data from the EXPRES spectrograph for making extreme-precision radial-velocity measurements. He showed two stars that look like they are showing empirical scatter (away from a Kepler curve) of roughly 0.4 m/s. That would be ground-breaking precision and an incredibly good start for this important new instrument. Now I have to find a way to worm my way onto that team...?
Early in the morning I spoke with Ana Bonaca (Harvard) about the amazing velocity data she has taken for stars in the GD-1 stellar stream in the Milky Way halo. As my loyal reader knows, this stream has a spur of stars off the main branch that are consistent with being perturbed away by a massive perturber that flew by. Now she has precise velocity information about stars in the main body of the stream and in the spur. Contrary to our naive predictions, the stream and spur have very similar velocities. But the spur appears to be far lower in velocity dispersion. Is this real? And is this what we expect? We didn't predict it in our theoretical paper on the subject, but then again we didn't look! I can see some arguments that it might be true. Bonaca also sees many other things in the data, like that the GD-1 stream membership is improved dramatically when we have metallicity information.
Today was the first day of a Likelihood-Free Inference workshop at Flatiron, run by Foreman-Mackey (Flatiron) and others. The day started off with an absolutely beautiful introduction by Kyle Cranmer (NYU) about many methods for likelihood-free inference. He started with conceptual matters, and some beautiful examples from intro physics and also from the Large Hadron Collider (where he has been a leader in doing sophisticated inferences). And then he went on a whirlwind tour of methods and ideas.
But my two big take-aways were the following (and these two things aren't even slightly comprehensive or fair to Cranmer's deep and wide presentation): One is that he gave a great statement of the general problem of LFI, where there are, in addition to the data, parameters, nuisance parameters, and per-datum latent variables. He pointed out that even if you are a frequentist you can (in principle) integrate out the latents, because your model puts a distribution (generally) over the per-datum latents. (That's an important point, which I should emphasize in my data-analysis class.) And of course the idea of LFI is that you can't actually compute this integrated likelihood (probability of the data given parameters and nuisances, integrating out latents) in practice. You can only produce joint samples of the data and the latents. So though you are permitted to integrate out the latents, you aren't capable of integrating them out (because, like in cosmology, say, your model is a baroque and expensive simulation).
The other take-away was an incredible idea, which I hadn't learned before (maybe I should read the literature!), which is that sometimes you can set things up (using discriminators—like classifiers—oddly) such that you can compute or approximate the likelihood ratio between two models, even if you can't compute the likelihood of either one. Cranmer said two interesting things about this: One is that if you have a scalar function of the data (like a classification score from a classifier) that is monotonically related to the likelihood ratio, there are ways to calibrate it into a likelihood ratio. The other is that if you need to compute something (the likelihood ratio in this case) you don't necessarily need to compute it by computing something far far harder to compute (the two individual likelihoods in this case); he attributed this sentiment to Vapnik. You can do a lot of inference just with likelihood ratios; you rarely need true likelihoods, so this idea has legs.
The Milky Way Mapper meeting continued today. Both yesterday and today there were great presentations on asteroseismology in NASA TESS that might impact our target selection. The Hekker group here in Goettingen is doing a number of relevant things, including feature engineering for long-period asteroseismological inference in short time streams (which connects to things we have been thinking about for stellar rotation in TESS), and fully automated delivery of asteroseismic parameters for red giants. Short presentations on all this were given by Bell, Kuszlewicz, and Themessl (all Goettingen). I had a good discussion with all of this crew at lunch today, where they were pretty pessimistic about my ideas about getting asteroseismological parameters out of the ESA Gaia data (in some late DR).
In a coffee break, Rix (MPIA) asked me a nice homework problem about time-domain spectroscopy, inspired by things he is thinking about with Dani Maoz (TAU): If you have exactly two observations of a star, separated by time interval Δt, and these deliver (a precise) difference in radial velocity Δv, what can you conclude about the orbital parameters of that star? Assume the star is orbiting a dark companion on a circular orbit, and your measurements are so precise, the measurement uncertainty is irrelevant.
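One way to start playing with this problem (my own scribble, not Rix's intended solution): for a circular orbit with semi-amplitude K and period P, the two measurements give |Δv| = K|sin(ωt₁+φ) - sin(ωt₂+φ)| ≤ 2K|sin(πΔt/P)|, so every trial period implies a minimum semi-amplitude.

```python
# Minimum circular-orbit semi-amplitude K_min(P) consistent with a
# radial-velocity change dv over baseline dt (all numbers invented).
import numpy as np

def K_min(dv, dt, P):
    """Smallest semi-amplitude (same units as dv) allowing a velocity
    change |dv| between two observations separated by dt, on period P."""
    return np.abs(dv) / (2.0 * np.abs(np.sin(np.pi * dt / P)))

dv, dt = 10.0, 30.0                         # e.g., km/s and days
periods = np.array([7.0, 20.0, 60.0, 300.0])
print(K_min(dv, dt, periods))
# Periods with P = 2*dt can get away with the floor K = |dv|/2;
# long periods (P >> dt) require much larger K, hence larger companions.
```

Folding in Kepler's law then turns each (P, K_min) pair into a minimum companion mass, which is presumably where the Maoz-style science lives.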
In a discussion led by Bird (Vandy) about signal-to-noise, Blanton (NYU) pointed out that the APOGEE detectors are up-the-ramp, so we can sub-frame them to a shorter exposure without making any approximations! That's incredible! It means that we could be doing time-domain astronomy with APOGEE on time-scales that are not accessible to any optical spectrograph. I got super-excited about this, and tried to convince Nidever (Montana) to get in there and make that change. He opined that it might not be trivial. However, the information is definitely there, latent. So my question is: What's the killer app for such technology? We can look at spectral variability information on essentially any time scale from seconds to hours. Woah.
Today was the first day of the Milky Way Mapper Workshop, at the Max Planck Institute for Solar System Research in Goettingen. The meeting is about points of target selection, operations, commissioning, and planning. I am very excited about Milky Way Mapper, which is part of the SDSS-V family of projects; it will take infrared and optical spectra of millions of stars. From my perspective a few important things happened at the meeting (note the subjectivity and unfairness of this; it is not a summary):
MWM will operate in a robotic mode, with robotic fiber positioners. This permits us to observe enormous numbers of stars, but it means that our default calibration strategy of arcs and flats between exposures that we have used in SDSS through SDSS-IV will not be tenable. That's good! Because it causes us to do some commissioning work at the start where we quantitatively analyze the calibration strategy.
We discussed principles underlying target-selection in our various target categories. Hans-Walter Rix (MPIA) and I intend to write a general paper for the astrophysics community about this question, because there are some hard-won lessons from previous projects and things we and others have done wrong. I will say more about this in future blog posts as I try to write some of it up, but the extremely important underlying principle is the likelihood principle: If information comes through the likelihood function, then you have to select your targets such that, at the end of the day, you can write down a computationally tractable likelihood function for the parameters of interest. That's perhaps a Duh! point, but I'd like to point out that many of the complex, multi-stage projects (like RV surveys, or time-domain follow-up spectroscopic projects) fail to meet this requirement! More on this over the next weeks.
I learned a few crazy simple things today. One is that SDSS-IV APOGEE is taking multiple hot-star standards per plate! That means that the survey has, through its calibration work, created a huge time-domain survey of hot stars over a huge part of the sky. That's pretty important for science. And at this point, they have not been fully exploited as a scientific project. It's many thousands of stars!
Another crazy thing is that the SDSS projects have obtained enormous numbers of white-dwarf spectra, sometimes deliberately and sometimes by accident. These cover large parts of the white-dwarf sequence in ESA Gaia data, and this sequence contains lots of informative and intriguing structure. That suggests an interesting Gaia Sprint project.
NASA TESS proposals are due tomorrow! I spent most of my morning writing with Tyler Pritchard (NYU), who has written almost all of our proposal to perform image differencing and produce transient-alert light curves with TESS. I worked on the descriptions of the philosophy and characteristics of the CPM model, which delivers very good performance in TESS-like situations (think K2 and Kepler). Not everything I have written is going to survive, though, because the proposal has a strict page limit of 4 pages (and an 800-character abstract, which is very hard!).
Late in the day I had a good conversation with Lauren Anderson (NYU) about how inducing points can be used to lower the rank of the linear-algebra operators in her project. We talked it out, about how with the control points your matrix can only have a rank as large as the control-point set, and (even better) the control points can be placed in the space to create symmetries that speed computation. But I had an epiphany during the conversation, which is Duh in retrospect: The low-rank approximation is an approximation to the information tensor not the covariance matrix. (These are just inverses of each other.)
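The rank argument can be seen in a few lines. A Nyström-style approximation built from m inducing (control) points has rank at most m, no matter how large the n-by-n matrix it approximates; the kernel and numbers below are invented for illustration.

```python
# Nystrom-style low-rank approximation: K approx= K_nm K_mm^{-1} K_nm^T,
# whose rank is bounded by the number m of inducing points.
import numpy as np

def rbf(a, b, ell=1.0):
    # squared-exponential kernel between two 1-d point sets
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 10, 200))      # n = 200 data locations
u = np.linspace(0, 10, 12)                # m = 12 inducing points

K_nm = rbf(x, u)
K_mm = rbf(u, u) + 1e-8 * np.eye(len(u))  # jitter for numerical stability
K_approx = K_nm @ np.linalg.solve(K_mm, K_nm.T)   # 200 x 200, rank <= 12

print(np.linalg.matrix_rank(K_approx))    # at most 12, despite the shape
```

Placing `u` on a regular grid (as here) is what opens the door to the symmetry-based speed-ups we discussed; and the epiphany above says the object to approximate this way is the inverse covariance, not the covariance.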
I got no research done at all today! The closest I came was a great lunch with my former student Morad Masjedi (Goldman Sachs), who never fails to provide me with an extremely interesting window into the world of finance (an industry to which some fraction of my group members go).
I spent research time on the weekend and today working on a NASA TESS proposal, led by Tyler Pritchard (NYU) in which we deliver image differences (using the CPM) and light curves (with assistance from ZTF public data). It is a big project but the proposal has a four-page limit, so the writing isn't trivial!
This morning, Kate Storey-Fisher (NYU) and I met with Alex Barnett (Flatiron) to discuss estimators for the correlation function. Barnett has discovered that the cosmological literature on the correlation function makes essentially no reference to the mathematics literature on point processes, and the point-process literature makes no reference to the Landy–Szalay estimator or anything like it! So there is work to do.
But of great interest today, Barnett has discovered that the Landy–Szalay estimator that we use in cosmology (and even the more trivial estimators) is non-trivially biased! It does not estimate the mean of the correlation function in an annular separation bin! It estimates a different integral of the correlation function. This has potentially disastrous consequences for things we have done related to the baryon acoustic feature.
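For readers who want the estimator in front of them: in each separation bin, Landy–Szalay is (DD - 2DR + RR)/RR with normalized pair counts. Barnett's point is about which integral of xi(r) this converges to within a bin, not about the formula itself. Here is a brute-force 1-d toy (unclustered data, so xi should scatter around zero):

```python
# Brute-force Landy-Szalay estimator in 1-d, on an unclustered toy catalog.
import numpy as np

def pair_counts(a, b, edges):
    # all ordered pair separations, binned
    d = np.abs(a[:, None] - b[None, :]).ravel()
    return np.histogram(d, bins=edges)[0].astype(float)

rng = np.random.default_rng(8)
data = rng.uniform(0, 1, 500)        # toy "galaxies" (no clustering)
rand = rng.uniform(0, 1, 2000)       # random catalog
edges = np.linspace(0.01, 0.2, 6)    # separation bins (exclude self-pairs)

dd = pair_counts(data, data, edges) / (len(data) * (len(data) - 1))
dr = pair_counts(data, rand, edges) / (len(data) * len(rand))
rr = pair_counts(rand, rand, edges) / (len(rand) * (len(rand) - 1))
xi = (dd - 2 * dr + rr) / rr         # Landy-Szalay estimate per bin
# For unclustered data, xi should be consistent with zero in every bin.
```

The subtlety Barnett identified only bites once xi varies appreciably across a bin, which is exactly the situation near a sharp feature like the baryon acoustic peak.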
At the Astronomical Data group meeting and the Dynamics group meeting I spoke a bit about the Suroor Gandhi (NYU) project to make a sandbox to look at mixing of stellar populations in phase space. As I spoke about them, I realized that we might be able to infer a lot about the Milky Way from the Antoja spiral. For one, the overall aspect ratio of the spiral tells you something about the mass density in the disk (because that aspect ratio relates a distance to a velocity, and that will make an acceleration). For another, the pitch angle as a function of radius should tell you the scale height. And so on! This mirrors things Kathryn Johnston (Columbia) has been saying at me for a while, but I am a slow learner! The nice thing is that these projects might be possible even with extremely simplistic, toy simulations; some of the arguments are very general!
Side note: These arguments are very related to the project code-named Chemical Tangents that I have been talking about for a while: They are methods for just seeing the orbit structure at first order in the data. Unlike in, say, Jeans modeling or virial estimates, where the orbit structure only appears in second-order statistics.
The most fun point in my research day today was a conversation with Suroor Gandhi (NYU) about making a sandbox for testing or developing intuitions about mixing in the disk. The idea is to put stars of different kinds (think chemical abundances or tags of some kind) localized in phase space, and see how the kinds mix in orbit space to illuminate tori (or equivalents). We started down this path because the mixing isn't totally intuitive, and it isn't clear always how long it will take.
At Stars Meeting at Flatiron, Wolfgang Kerzendorf (NYU) showed a nice demo of an idea we have been kicking around, which is to use augmented reality to visualize data in the space. It was just a demo, but it was promising! After that, Rocio Kiman (CUNY) showed her work on M-dwarf and L-dwarf age indicators and their inter-relations. She showed that flaring dwarfs tend to be larger in radius, which might be evidence of having magnetic pressure changing their structures.
In the afternoon, I discussed NASA TESS proposal ideas with Tyler Pritchard (NYU) and Maryam Modjaz (NYU), who are interested in using TESS to do supernova and explosive-transient science. I was planning on doing something with the CPM that was developed by Dun Wang (formerly NYU) for making image differences in TESS-like time-domain data. We tentatively decided to join forces, and we will properly decide tomorrow.
It was a low-research day today but Boris Leistedt (NYU) and I got in a discussion of how and why we run meetings and workshops and what amount of time to spend on that as a postdoc. We also talked about inclusivity at such meetings and the issues around that; especially bringing on-board collaborators who are suspicious about diversity-related desiderata. It's a complex set of issues, especially because it is where science meets ethics, and everyone feels judged or threatened.
This morning I met with Gus Beane (Penn) to discuss our work on a possible update of my pedagogical notes on cosmological distance measures. It needs to be updated because the world models in that note are so out of date! (See, for example, footnote 1 on page 2.) And the discussion about what's important doesn't map well onto what's important in today's cosmological context, where the dark energy has internal complexity. We discussed the scope or form for an update and haven't decided whether it is a total re-write, a revision, or an appendix. But whatever we decide, the first order of business is to find out what parameterizations are currently in use for the dark energy, and update all the equations, and figure out which of the analytic results survive the update (very few will, I think). Of course it's hard to predict where things are going in the future, so I don't know what we should really concentrate on.
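As an example of why so few analytic results will survive: under the common CPL parameterization w(a) = w0 + wa(1 - a), the dark-energy density picks up a factor a^(-3(1+w0+wa)) exp(-3 wa (1-a)), and the comoving distance has to be done numerically. This sketch is my own illustration of the kind of update under discussion, not anything from the notes themselves.

```python
# Comoving distance in a flat w0-wa (CPL) dark-energy model,
# integrated numerically by the trapezoid rule.
import numpy as np

def E(z, Om, w0, wa):
    # dimensionless Hubble rate H(z)/H0 for flat Omega_m + CPL dark energy
    a = 1.0 / (1.0 + z)
    de = (1.0 - Om) * a ** (-3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * (1.0 - a))
    return np.sqrt(Om * (1.0 + z) ** 3 + de)

def comoving_distance(z, Om=0.3, w0=-1.0, wa=0.0, H0=70.0, n=4097):
    """Line-of-sight comoving distance in Mpc (flat geometry)."""
    c = 299792.458  # speed of light, km/s
    zs = np.linspace(0.0, z, n)
    f = 1.0 / E(zs, Om, w0, wa)
    return (c / H0) * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zs))
```

Setting w0 = -1, wa = 0 recovers the familiar LCDM distances (about 3300 Mpc at z = 1 for Om = 0.3, H0 = 70), which is the obvious regression test for any rewrite of the notes.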
In the Astronomical Data Group Meeting at Flatiron, Rodrigo Luger (Flatiron) showed his work (in collaboration with a few others) that is heading towards using the earth-shine scattered light in the NASA TESS focal plane to reconstruct the continents and cloud cover on the rotating Earth. This project is something of a joke, but it puts to the test some ideas that are important for the future of mapping the surfaces of directly imaged exoplanets. We discussed the approximations that Luger and team are making for tractability of the inference. My position is that they should make brutal approximations and go easy on themselves mathematically!
In that same meeting, Kate Storey-Fisher (NYU) showed that she can, with her new method for estimating the correlation function, reproduce the SDSS LRG measurements of the Baryon Acoustic Feature, where it was first discovered (by me, among many others)! Her code works and now to show that we can make the measurement with far less effective model complexity and far less dependence on simulations to get uncertainty estimates. She has her killer app working and we are now ready to write a paper.