2019-06-13

SDSS-V review, day 2

Today was day two of the SDSS-V Multi-object spectroscopy review. We heard about the spectrographs (APOGEE and BOSS), the full software stack, and observatory staffing, and we had an extremely good discussion of project management and systems engineering. On this latter point, we discussed the issue that scientists in academic collaborations tend to see the burdens of documenting requirements and interfaces as interfering with their work. Project management sees these things as helping get the work done, on time and on budget. We discussed some of the ways we might get more of the project—and more of the scientific community—to see the systems-engineering point of view.

The panel spent much of the day working on our report and giving feedback to the team. I am so honored to be a (peripheral) part of this project. It is an incredible set of sub-projects and sub-systems being put together by a dream team of excellent people. And the excellence of the people cuts across all levels of seniority and all backgrounds. My day ended with conversations about how we can word our toughest recommendations so that they will constructively help the project.

One theme of the day was education: We are educators, working on a big project. Part of what we are doing is helping our people to learn, and helping the whole community to learn. And that learning is not just about astronomy. It is about hardware, engineering, documentation, management, and (gasp) project reviewing. That's an interesting lens through which to see all this stuff. I love my job!

2019-06-12

SDSS-V review, day 1

Today was day one of a review of the SDSS-V Multi-object spectroscopy systems. This is not all of SDSS-V, but it is the major part. It includes the Milky Way Mapper and Black-Hole Mapper projects, two spectrographs (APOGEE and BOSS), two observatories (Apache Point and Las Campanas), and a robotic fiber-positioner system. Plus boatloads of software and operations challenges. I agreed to chair the review, so my job is to lead the writing of a report after we hear two days of detailed presentations on project sub-systems.

One of the reasons I love work like this is that I learn so much. And I love engineering. And indeed a lot of the interesting (to me) discussion today was about engineering requirements, documentation, and project design. These are not things we are traditionally taught as part of astronomy, but they are really important to all of the data we get and use. One of the things we discussed is that our telescopes have fixed focal planes and our spectrographs have fixed capacities, so it is important that the science requirements both flow down from important scientific objectives, and flow down to an achievable, schedulable operation, within budget.

There is too much to say in one blog post! But one thing that came up is fundraising: Why would an institution join the SDSS-V project when they know that we are paragons of open science and that, therefore, we will release all of our data and code publicly as we proceed? My answer is influence: The SDSS family of projects has been very good at adapting to the scientific interests of its members and collaborators, and especially weighting those adaptations in proportion to the work that people are willing to do. And the project has spare fibers and spare target-of-opportunity capacity! So you get a lot by buying into this project.

Related to this: This project is going to solve a set of problems in how we do massively multiplexed heterogeneous spectroscopic follow-up in a set of mixed time-domain and static target categories. These problems have not been solved previously!

2019-06-11

words on a plane

I spent time today on an airplane, writing in the papers I am working on with Jessica Birky (UCSD) and Megan Bedell (Flatiron). And I read documents in preparation for the review of the SDSS-V Project that I am leading over the next two days in a Denver airport hotel.

2019-06-10

spiral structure

This morning on my weekly call with Eilers (MPIA) we discussed the new scope of a paper about spiral and bar structure in the Milky Way disk. Back at the Gaia Sprint, we thought we had a big result: We thought we would be able to infer the locations of the spiral-arm over-densities from the velocity field. But it turned out that our simple picture was wrong (and in retrospect, it is obvious that it was). But Eilers has made beautiful visualizations of disk simulations by Tobias Buck (AIP), which show very similar velocity structure and for which we know the truth about the density structure. These visualizations say that there are relationships between the velocity structure and the density structure, but that the relationship evolves with time. We tried to write a sensible scope for the paper in this new context. There is still good science to do, because the structure we see is novel and spans much of the disk.

2019-06-06

information theory and noise

In my small amount of true research time today, I wrote an abstract for the information-theory (or is it data-analysis?) paper that Bedell and I are writing about extreme-precision radial-velocity spectroscopy. The question is: What is the best precision you can achieve, and what data-analysis methods saturate the bound? The answer depends, of course, on the kinds of noise you have in your data! Oh, and what counts as noise.
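For concreteness, here is a minimal sketch of the standard photon-limited (Cramér–Rao) version of that bound, with a completely invented toy spectrum; the real paper has to grapple with what counts as noise:

```python
import numpy as np

# Toy spectrum: one Gaussian absorption line on a unit continuum.
c = 2.99792458e8                          # speed of light, m/s
wave = np.linspace(4999.0, 5001.0, 2000)  # wavelength grid, Angstrom
depth, center, width = 0.5, 5000.0, 0.05
flux = 1.0 - depth * np.exp(-0.5 * ((wave - center) / width) ** 2)

n_photons = 1.0e4                   # continuum photons per pixel (invented)
signal = flux * n_photons
sigma = np.sqrt(signal)             # photon (Poisson) noise per pixel

# A Doppler shift v perturbs the spectrum by dS/dv = (lambda / c) dS/dlambda,
# so each pixel carries (dS/dv)^2 / sigma^2 of Fisher information about v.
dS_dlam = np.gradient(signal, wave)
dS_dv = (wave / c) * dS_dlam
fisher = np.sum(dS_dv ** 2 / sigma ** 2)
print("Cramer-Rao bound on sigma_v: %.1f m/s" % (1.0 / np.sqrt(fisher)))
```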

2019-06-05

alphas, robots, u-band

In an absolutely excellent Stars and Exoplanets Meeting, Rodrigo Luger (Flatiron) had everyone in the room (and that's more than 30 people) say what they plan to get done this summer!

Following that, Melissa Ness (Columbia) talked about the different alpha elements and alpha enhancement: Are all alpha elements enhanced the same way? Apparently models of type-Ia supernovae say that different alpha elements should form in different parts of the supernova, so it is worth looking to see if there are abundance differences in different alphas. The generic expectation is that there should be a trend with Z. She has some promising results from APOGEE spectra.

Mike Blanton (NYU) talked about how we figure out how to perform a set of multi-epoch, multi-fiber spectroscopic surveys in SDSS-V. He has a product called Robostrategy which tries to figure out whether a set of targets (with various requirements on signal-to-noise and repeat visits and cadence and so on) is possible to observe with the two observatories we have, in a realistic set of exposures. That's a really non-trivial problem! And yet it appears that Blanton may have working code. I'm impressed, because integer programming is hard.
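For flavor, here is a deliberately tiny toy of the feasibility question (this is not Robostrategy, just an illustration of why it is an integer program): binary observe-or-not decisions, per-exposure fiber capacity, per-target visit requirements, brute-forced:

```python
import numpy as np
from itertools import product

# Toy scheduling problem: 5 targets, 3 exposures, 2 fibers per exposure.
# Each target needs n_visits visits and is only observable in some exposures.
# The decision variables are binary, which makes this an integer program.
n_targets, n_exposures, fibers = 5, 3, 2
n_visits = np.array([2, 1, 1, 1, 1])
observable = np.array([[1, 1, 0],   # target x exposure availability
                       [1, 0, 1],
                       [0, 1, 1],
                       [1, 1, 1],
                       [0, 0, 1]], dtype=bool)

best = None
for assign in product([0, 1], repeat=n_targets * n_exposures):  # brute force
    x = np.array(assign).reshape(n_targets, n_exposures)
    if np.any(x.astype(bool) & ~observable):
        continue                    # can't observe when unavailable
    if np.any(x.sum(axis=0) > fibers):
        continue                    # fiber capacity per exposure
    done = (x.sum(axis=1) >= n_visits).sum()
    if best is None or done > best[0]:
        best = (done, x)
print("targets completed:", best[0])
```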

And Shuang Liang (Stony Brook) showed us that it is possible to calibrate u-band observations using the main-sequence turn-off, as long as you account for the differences between the disk and the halo. He has developed empirical approaches, and he has good evidence that his calibration based on the MSTO is better than other more traditional methods!

2019-06-04

titles, then abstracts, then figures

I had conversations today with Megan Bedell (Flatiron) and Kate Storey-Fisher (NYU) about titles for their respective papers. I am slowly developing a whole theory of writing papers, which I wish I had thought about more earlier in my career. I made many mistakes! My view is that the most important thing about a paper is the title. Which is not to say that you should choose a cutesy title. But it is to say that you should make sure the person scanning a listing of papers can estimate very accurately what your paper is about.

I then think the next most important thing is the abstract. Write it early, write it often. Don't wait until the paper is done to write the abstract! The abstract sets the scope. If you have too much to put into one abstract, split your paper in two. If you don't have enough, your paper needs more content. And unless you are very confident that there is a better way, obey the general principles (not necessarily the exact form) underlying the A&A structure of context, aims, methods, results.

Then the next most important thing is (usually) the figures and captions. My model reader looks at the title. If it's interesting, the reader looks at the abstract. If that's interesting, they look at the figures. If all that is interesting, maybe they will read the paper. Since we want our papers to be read, and we want to respect the time of our busy colleagues, we should make sure the title, abstract, and figures-plus-captions are well written, accurate, unambiguous, interesting, and useful.

So I spent time today working on titles.

2019-06-03

free energy and life

One amusing conversation today was between Ben Pope (NYU) and myself about whether hot stars are more or less likely to host planets with life. We believe (it's not extremely well established yet) that there are more habitable planets around M-type stars than G-type (and there is probably a relatively smooth function of temperature). So why do we live around a G star? Is it because there is more free energy per photon? I have assumed that this is why. But we realized that we can make this argument quantitative. One question that I have is this: Is this argument anthropic? Or is it just the simple observation that Earth hosts life? I think it is anthropic, because it has something to do with whether our place is special.
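Here is the back-of-envelope version of that quantitative argument, with the assumptions loudly labeled (mean thermal-photon energy of 2.70 kT, and a crude Carnot-like limit between the photon temperature and an assumed 288 K planet):

```python
import numpy as np

k_B = 1.380649e-23          # J/K
T_planet = 288.0            # habitable-planet temperature, K (assumed)

# Mean thermal-photon energy is ~2.70 k T_star; treating the biosphere as
# an engine run between the photon temperature and the planet temperature,
# the extractable work per photon is Carnot-limited (a crude assumption):
for name, T_star in [("M dwarf", 3000.0), ("G dwarf", 5800.0)]:
    e_photon = 2.70 * k_B * T_star
    work = e_photon * (1.0 - T_planet / T_star)
    print("%s: %.2f eV free energy per photon" % (name, work / 1.602e-19))
```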

2019-06-02

responding to referee

I spent the weekend finally finishing a response to referee on my paper on using spectroscopy and photometry to get precise stellar distances. It was a very constructive, helpful, and positive report, so it really is embarrassing that it took me this long to finish. But it's done and we will resubmit on Monday. Somehow in my dotage, it gets hard to do the things I must do for myself. I am motivated to meet my obligations to others, but it is hard to meet those that help primarily me.

2019-05-30

multi-messenger events and software

Today was a one-day workshop on multi-messenger astrophysics to follow yesterday’s one-day workshop on physics and machine learning. There were interesting talks and discussions all day, and I learned a lot about facilities, operations, and plans. Two little things that stood out for me were the following:

Collin Capano (Hannover) spoke about his work on detecting sub-threshold events in LIGO using coincidences with other facilities, but especially NASA Fermi. He made some nice Bayesian points about how, at fixed signal-to-noise, the maximum possible confidence in such coincidences grows with the specificity and detail (predictive power) of the event models. This has important consequences for things we have been discussing at NYU in our time-domain meeting. But Capano also implicitly made a strong argument that projects cannot simply release catalogs or event streams: By definition the sub-threshold events require the combination of probabilistic information from multiple projects. For example, in his own projects, he had to re-process the Fermi GBM photon stream. Those considerations—about needing access to full likelihood information—have very important implications for all new projects and especially LSST.
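Here is a toy version of the first point (all numbers invented): treat background gamma-ray triggers as a Poisson process; a more predictive source model shrinks the allowed coincidence window, and the odds ratio grows accordingly:

```python
import numpy as np

# Toy coincidence significance: a GW candidate and a gamma-ray candidate.
# Under the chance hypothesis, gamma-ray triggers arrive as a Poisson
# process with rate R; a more predictive source model permits a narrower
# time window, which shrinks the chance probability.
R = 1.0 / 3600.0      # background trigger rate, per second (invented)
p_signal = 0.5        # P(counterpart within window | common source), invented

for window in [100.0, 10.0, 1.0]:   # allowed |t_gw - t_grb|, seconds
    p_chance = 1.0 - np.exp(-R * 2 * window)
    print("window %5.1f s: odds ratio ~ %.0f" % (window, p_signal / p_chance))
```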

Daniela Huppenkothen (UW) put into her talk on software systems some comments about ethics: Is there a role for teaching and training astronomers in the ethical aspects of software or machine learning? She gave the answer “yes”, focusing on the educational point that we are launching our people into diverse roles in science and technology. I spoke to her briefly after her talk about telescope scheduling and operations: Since telescope time is a public good and zero-sum, we are compelled to use it wisely, and efficiently, and transparently, and (sometimes) even explainably. And with good legacy value. That’s a place where we need to develop ethical systems, at least sometimes. All that said, I don’t think much thought has gone into the ethical aspects of experimental design in astronomy.

2019-05-29

#PhysML19 and likelihoods, dammit

Today was a one-day workshop on Physics in Machine Learning. Yes, you read that right! The idea was in part to get people to draw out how physics and physical applications have been changing or influencing machine-learning methods. And it is the first in a pair of one-day workshops today and tomorrow run by Josh Bloom (Berkeley). There were many great talks and I learned a lot. Here are two solipsistically chosen highlights:

Josh Batson (Chan Zuckerberg Biohub) gave an absolutely great talk. It gave me an epiphany! He started by pointing out that machine-learning methods often fit the mean behavior of the data but not the noise. That's magic, since we haven't said what part is the noise! He then went on to talk about projects noise2noise and noise2self in which the training labels are noisy: In the first case the labels are other noisy instances of the same data, and in the second the labels are the same data! That's a bit crazy. But he showed (and it's obvious when you think about it) that de-noising can work without any external information about what a non-noisy datum would look like. We do that all the time with median filters and the like! But he gave a beautiful mathematical description of the conditions under which this is possible (they are relatively easy to meet; it involves the noise being conditionally independent of the signal, as in causal-inference contexts). The stuff he talked about is potentially very relevant to The Cannon (it probably explains why we are able to de-noise the training labels) and to EPRV (if we think of the stellar variability as a kind of noise).
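Here is a cartoon of the noisy-labels point in one dimension (my construction, not Batson's): fit a smooth model using only noisy targets, and the fit lands near the clean signal anyway, because the label noise is independent and zero-mean:

```python
import numpy as np

rng = np.random.default_rng(17)
x = np.linspace(0.0, 1.0, 200)
truth = np.sin(2 * np.pi * x)                 # clean signal (never used in fit)
y1 = truth + 0.5 * rng.normal(size=x.size)    # noisy copy 1
y2 = truth + 0.5 * rng.normal(size=x.size)    # noisy copy 2: the "labels"

# Fit a smooth model with the *noisy* y2 as targets. Because the noise in
# y2 is independent and zero-mean, the least-squares solution converges to
# the clean signal: de-noising without any clean training data.
A = np.vander(x, 8)                           # degree-7 polynomial features
coeffs, *_ = np.linalg.lstsq(A, y2, rcond=None)
model = A @ coeffs
print("rms of noisy data vs truth: %.3f" % np.std(y1 - truth))
print("rms of model vs truth:      %.3f" % np.std(model - truth))
```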

Francois Lanusse (Berkeley) and Soledad Villar (NYU) gave talks about using deep generative models to perform inferences. In the Lanusse talk, he discussed the problem that we astronomers call deblending of overlapping galaxy images in cosmology survey data (like LSST). Here the generative model is constructed because (despite 150 years of trying) we don't have good models for what galaxies actually look like, or certainly not any likelihood function in image space! He showed a great generative model and some nice results; it is very promising. I was pleased that he strongly made the point that GANs and VAEs don't use proper likelihood functions when they are trained, and that might lead to serious problems when you use them for inference. In particular, GANs are very dangerous because of what is called “mode collapse”—the problem that you can generate only part of the data space and still do well under the GAN objective. That could strongly bias deblending, because it would put vanishing probability in real parts of the space. So he deprecated those methods and recommended methods (like normalizing flows) that have proper likelihood formulations. That's an important, subtle, and deep point.
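For contrast with the GAN situation, here is the defining computation of a normalizing flow in one dimension: an invertible map plus its Jacobian term gives an exact log-likelihood that you can train on. This is just the textbook change-of-variables identity, with a crude grid search standing in for gradient-based training:

```python
import numpy as np

# The defining property of a normalizing flow: an invertible map f with a
# tractable Jacobian gives an exact log-likelihood via change of variables,
#   log p_x(x) = log p_z(f(x)) + log |f'(x)|,
# which is exactly what GAN training lacks.
def log_likelihood(x, mu, log_sigma):
    z = (x - mu) * np.exp(-log_sigma)                # affine flow: x -> z
    log_pz = -0.5 * z**2 - 0.5 * np.log(2 * np.pi)   # standard-normal base
    log_det = -log_sigma                             # log |dz/dx|
    return np.sum(log_pz + log_det)

rng = np.random.default_rng(4)
data = 3.0 + 2.0 * rng.normal(size=1000)
# Crude grid search stands in for gradient-based training:
grid = [(mu, ls) for mu in np.linspace(2, 4, 41)
                 for ls in np.linspace(0, 1.5, 31)]
mu, ls = max(grid, key=lambda p: log_likelihood(data, *p))
print("fit: mu = %.2f, sigma = %.2f" % (mu, np.exp(ls)))
```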

After Lanusse's talk, I came to see him about the point that if LSST implements any deblender of this type (fit a model, deliver galaxies as posterior results from that model), the LSST Catalog will be unusable for precise measurements! The reason is technical: A catalog must deliver likelihood information not posterior information if the catalog is to be used in down-stream analyses. This is related to a million things that have appeared on this blog (okay not a million) and in particular to the work of Alex Malz (NYU): Projects must output likelihood-based measurements, likelihoods, and likelihood functions to be useful. I can't say this strongly enough. And I am infinitely pleased that ESA Gaia has done the right thing.

2019-05-28

snail models; platykurtic galaxies

Suroor Gandhi (NYU) made me, in real time, some really nice plots today of what a swarm of stars in phase space does over time. Her plots are for the vertical dynamics of the disk. The very exciting thing is that she can reproduce the qualitative properties of The Snail (the phase-space spiral in the local Milky Way disk). Now we have to look at dependence on initial conditions, time of evolution, and potential parameters.

Dustin Lang (Perimeter) and I spoke for a bit about our old project modeling simple galaxy profiles with mixtures of concentric Gaussians. He is trying to build a truly continuous, interpolate-able model for all Sersic indices, and of course (well, it wasn't obvious to me before today) at a Sersic index of 0.5, the galaxy profile is exactly a Gaussian, so a mixture of Gaussians makes no sense. And then at indices less than 0.5, the distribution becomes platykurtic, so it can't be fit with a concentric mixture of positive Gaussians. What to do there? Lang wants to add in negative Gaussians! I want to say “don't go there”.
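For flavor, here is a cartoon of the concentric-Gaussian idea (not our actual code): non-negative least squares for the amplitudes of Gaussians on a fixed grid of widths, fit to an exponential (Sersic n = 1) profile:

```python
import numpy as np
from scipy.optimize import nnls

# Approximate an exponential (Sersic n=1) radial profile with K concentric,
# non-negative Gaussians on a fixed grid of widths.
r = np.linspace(0.0, 8.0, 400)
target = np.exp(-r)                          # exponential profile
widths = 2.0 ** np.arange(-3.0, 3.0)         # K=6 fixed Gaussian widths
A = np.exp(-0.5 * (r[:, None] / widths[None, :]) ** 2)
amps, resid = nnls(A, target)
print("amplitudes:", np.round(amps, 4))
print("rms residual: %.2e" % (resid / np.sqrt(r.size)))
# For n < 0.5 the target is platykurtic (flatter-topped than any Gaussian),
# and no non-negative concentric mixture fits it: the nnls residual blows up.
```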

2019-05-27

proposal writing

I spent the long weekend, including today, working on my proposal (with Bedell) for the NASA XRP (exoplanets) competition called something like “Extreme-precision radial-velocity in the presence of intrinsic stellar variability”. We propose to do things around modeling the spectra in the joint domain of time and wavelength to improve radial-velocity precision. But we also have an information-theoretic or conceptual part of the proposal. Does NASA support conceptual work? We hope so!

2019-05-24

LIGO housekeeping data, MCMC

Yesterday I gave a talk about data science at Oregon Physics. Today I talked about dark matter—on the chalk board. I talked about various bits of vapor-ware that we are doing with ESA Gaia and streams and The Snail. I also talked about the GD-1 perturbation found by Bonaca and Price-Whelan. That was followed by an excellent and fun lunch with the graduate students, in which they interviewed me about my career and science. Oregon Physics has a great PhD cohort.

In the afternoon, Ben Farr (Oregon) and I hived off to a rural brewery to discuss LIGO systematics and MCMC sampling. I have fantasies about calibrating the LIGO strain data using the enormous numbers of housekeeping channels that are recorded simultaneously with the strain. Farr was encouraging, in that he does not believe that big models of this sort have been seriously done inside the Consortium. That means there might be a role for me or for a collaboration that includes me.

On the MCMC front, we discussed a few different sampling ideas. One is a project by Farr called kombine, which is an ensemble sampler that uses the ensemble to inform an approximation to the posterior, which in turn informs the sampling. Another is vapor-ware by me called bento box, which hierarchically splits your problem into a tree of disjoint problems until you get to a set of unimodal problems that are individually trivial. I realized while I was talking that I could even use HMC with reflection moves to simplify the problem at the hard boundaries of the boxes in the bento box.

On the drive back to the airport, we found that we agreed on the point that no-one should ever compute a fully marginalized likelihood. That was refreshing; Farr is one of the very few Bayesians I know who get this point. It inspired me to spend my time at the airport tinkering with the paper I want to write on this subject.

2019-05-23

Oregon, falsifiability, and the LIGO project

Today was my first day of a two-day visit to Ben Farr (Oregon) and the University of Oregon. I got lots of work done during the travel phases of the day, because I have a NASA proposal due while I'm here in Oregon! Nothing like a deadline.

I had a great day. Highlights included a discussion with James Schombert (Oregon) about various philosophical matters related to falsification. He explicitly brought up my paper about plausibility and science, which I had nearly forgotten! It's nice to know that people are finding it useful still. I really wrote it to get some things off my chest, things that had been troubling me since graduate school in the 1990s. In that paper I argue that we prefer theories that are both observationally reasonable and also theoretically reasonable; there isn't really such a thing as purely empirical falsification. At least not in the observational sciences.

But of course the main theme of my visit was LIGO. The lure of discussing this project with Farr is what brought me here. We postponed our ideas for new projects until tomorrow and, maybe surprisingly, spent our time talking about university-based project management! Because although LIGO is well funded to build hardware and deliver strain measurements, what is done with those to detect and characterize systems and populations is left to the science community, which is a looser collaboration, and which must raise most of its money externally. And, like the SDSS family of projects, it relies on essentially volunteer efforts from many ornery faculty. That's an interesting set of problems in organizational management, psychology, and political science!

2019-05-22

radial velocities from slit spectra

Marla Geha (Yale) made a surprise visit to Flatiron today, and bombed the weekly Stars and Exoplanets Meeting with a discussion of the challenges of measuring velocity dispersions (and hence masses, and hence dark-matter-annihilation limits) in ultra-faint dwarf galaxies in the halo of the Milky Way. As my loyal reader knows, this problem is very similar to problems we are working on at Flatiron around extreme-precision radial-velocity (EPRV) spectroscopy. Geha's problem is both harder and easier. It is easier because she only needs km/s (not cm/s) precision. It is harder because she has to use a slit spectrograph and point it at very faint stars! It is easier because she has both sky emission lines and telluric absorption lines to help calibrate. It is harder because differences in slit illumination mean that the sky lines and the telluric absorption don't agree for the wavelength calibration!

After stars meeting, the conversation continued among Geha, Bedell (Flatiron), and me. We discussed many things, including the point that the offset between tellurics and sky lines is a wavelength offset, not a radial-velocity offset. Or it is even something more sophisticated, related to the spectrograph optics. We discussed the point that her problems are fundamentally hierarchical, because some parameters are associated with a star, some with an exposure, some with a slit-mask, some with a time and so on. We also discussed how the wobble framework that Bedell and I have built could be extended to capture these effects. It's certainly possible. Oh, I nearly forgot: We also discussed masking and apodization of sky lines and telluric lines in the science spectra, and how to do that without biasing down-stream measurements. Spergel (Flatiron) pointed us to some literature that he was pleased to say is older than any of us (Spergel included).
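The wavelength-offset versus RV-offset distinction fits in a few lines; a Doppler shift scales with wavelength, while an instrumental shift does not (numbers below are purely illustrative):

```python
import numpy as np

# A Doppler (RV) offset shifts every line by delta_lambda = lambda * (v/c),
# proportional to wavelength. An instrumental wavelength offset shifts every
# line by the same delta_lambda regardless of wavelength. Sky and telluric
# lines sit at fixed wavelengths, so they calibrate the instrument, not the RV.
c = 2.99792458e8
lines = np.array([4000.0, 5000.0, 6000.0, 7000.0])   # Angstrom
print("RV 1 km/s:     ", lines * (1000.0 / c))       # grows with lambda
print("instr. 0.02 A: ", np.full_like(lines, 0.02))  # constant
```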

I should say that Geha's admirable goal is to re-reduce all of the nearly 10^5 stellar spectra in the DEIMOS archive! Now that's my kind of project.

2019-05-21

imaging asteroseismic modes on the stellar surface

Many threads of conversation over the past weeks came together today in a set of coincidences. Conversations with Bedell (Flatiron), Pope (NYU), Luger (Flatiron), and Farr (Flatiron), ranging over stochastic processes and inferring stellar surface features from Doppler imaging, all overlap at stellar asteroseismic p modes: In principle, with high-resolution, high-signal-to-noise stellar spectral time series (and we have these, in hand!) we should be able not only to see p modes but also see their footprint on the stellar surface. That is, directly read ell and em off the spectral data. In addition, we ought to be able to see the associated temperature variations. This is all possible because the stars are slowly rotating, and each mode projects onto the rotating surface differently. Even cooler than all this: Because the modes are coherent for days in the stars we care about, we can build very precise matched filters to combine the data coherently from many exposures. There are many things to do here.
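Here is the coherence argument in miniature (all numbers invented): if the mode phase is stable across the run, projecting the full time series onto sines and cosines at the known frequency recovers an amplitude well below the per-exposure noise:

```python
import numpy as np

rng = np.random.default_rng(8)
nu = 3.1e-3                               # p-mode frequency, Hz (solar-ish)
t = np.arange(0.0, 2.0 * 86400.0, 60.0)   # 2 days of 1-minute "exposures"
signal = 0.1 * np.sin(2 * np.pi * nu * t)        # mode RV signal, m/s (toy)
data = signal + 1.0 * rng.normal(size=t.size)    # 1 m/s per-exposure noise

# Matched filter: if the mode stays coherent over the run, project the data
# onto sin and cos at the known frequency; S/N grows like sqrt(N exposures).
s, co = np.sin(2 * np.pi * nu * t), np.cos(2 * np.pi * nu * t)
amp = np.hypot(2 * np.mean(data * s), 2 * np.mean(data * co))
print("recovered amplitude: %.3f m/s (true 0.1)" % amp)
```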

2019-05-20

predicting one population of transients from another

Tyler Pritchard (NYU) convenes a meeting on Mondays at NYU to discuss time-domain astrophysics. Today we had a discussion of a very simple idea: Use the rates of short GRBs that are observed and measured (using physical models from the MacFadyen group at NYU) to have certain jet–observer offset angles to infer rates for all the way-off-axis events that won't be GRB triggers but might be seen in LSST or other ground-based optical or radio surveys. Seems easy, right? It turns out it isn't trivial at all, because the extrapolation of a few well-understood events in gamma-rays, subject to gamma-ray selection effects, to a full population of optical and radio sources (and then assessing those selection effects) requires quite a few additional or auxiliary assumptions. This is even more true for the bursts where we don't know redshifts. I was surprised to hear myself use the astronomy word "V-max"! But we still (as a group) feel like there must be low-hanging fruit. And this is a great application for the MacFadyen-group models, which predict brightness as a function of wavelength, time, and jet–observer angle.

2019-05-17

hierarchical probabilistic calibration

Today Lily Zhao (Yale) visualized for me some of the calibration data they have for the EXPRES spectrograph at Yale. What she showed is that the calibration does vary (the variations are detected at very high signal-to-noise), and that the variations are systematic or smooth. That is, the instrument varies only a tiny tiny bit, but it does so very smoothly, and the smooth variations are measured incredibly precisely. This suggests that it should be possible to pool data from many calibration exposures to build a better calibration model for every exposure than we could get if we treated the data all independently.

Late in the day, we drew a graphical model for the calibration, and worked through a possible structure. As my loyal reader knows, I want to go to full two-dimensional modeling of spectrographs! But we are going to start with measurements made on one-dimensional extractions. That's easier for the community to accept right now, anyways!
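The punchline of the graphical model, in toy form: when the instrument-level scatter is small compared to the per-exposure measurement noise, partial pooling shrinks each exposure's calibration toward the global mean and wins. A minimal sketch with invented numbers, and with the population parameters assumed known (the real model would infer them):

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy calibration: each exposure has a true offset drawn from a tight
# instrument-level distribution; each is measured with per-exposure noise.
tau, sigma = 0.01, 0.05            # population scatter, measurement noise
truth = tau * rng.normal(size=50)  # 50 exposures
meas = truth + sigma * rng.normal(size=50)

# Partial pooling: shrink each exposure's measurement toward the global
# mean with inverse-variance weights.
w = tau**2 / (tau**2 + sigma**2)
pooled = w * meas + (1 - w) * np.mean(meas)
print("rms error, independent: %.4f" % np.std(meas - truth))
print("rms error, pooled:      %.4f" % np.std(pooled - truth))
```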


2019-05-16

forecasting tools; beautiful spectrograph calibration

Our five-person (Bedell, Hogg, Queloz, Winn, Zhao) exoplanet meeting continued today, with Winn (Princeton) working out the elements needed to produce a simulator for a long-term EPRV monitoring program with simple observing rules. He is interested in working out under what circumstances such a program can be informative about exoplanets in regimes that neither Kepler nor existing EPRV programs have strongly constrained, like near-Earth-masses on near-Earth-orbits around near-Sun stars. And indeed we must choose a metric or metrics for success. His list of what's needed, software-wise, is non-trivial, but we worked out that every part of it would be a publishable contribution to the literature, so it could be a great set of projects. And a very useful set of tools.

Zhao (Yale) showed me two-dimensional calibration data from the EXPRES instrument illuminated by their laser-frequency comb. It is astounding. The images are beautiful, and every single line in each image is at a perfectly known (from physics!) absolute wavelength. This might be the beginning of a very new world. The instrument is also beautifully designed so that all the slit (fiber, really, but it is a rectangular fiber) images are almost perfectly aligned with one of the CCD directions, even in all four corners of the image. Not like the spectrographs I'm used to!

2019-05-15

do we need to include the committee in our model?

Josh Winn (Princeton) and Lily Zhao (Yale) both came in to Flatiron for a couple of days today to work with Megan Bedell (Flatiron), Didier Queloz (Cambridge), and me. So we had a bit of a themed Stars and Exoplanets Meeting today at Flatiron. Winn talked about various ways to measure stellar obliquities (that is, angles between stellar-rotation angular momentum vectors and planetary system angular-momentum vectors). He has some six ways to do it! He talked about statistical differences between v sin i measurements for stars with and without transiting systems.

Zhao and Queloz talked about their respective big EPRV programs to find Earth analogs in radial-velocity data. Both projects need to get much more precise measurements, and observe fewer stars (yes fewer) for longer times. That's the direction the field is going, at least where it concerns discovery space. Queloz argued that these are going to be big projects that require patience and commitment, and that it is important for new projects to control facilities, not just to apply for observing time each semester! And that's what he has with the Terra Hunting Experiment, in which Bedell, Winn, and I are also partners.

Related to all that, Zhao talked about how to make an observing program adaptive (to increase efficiency) without making it hard to understand (for statistical inferences at the end). I'm very interested in this problem! And it relates to the Queloz point, because if a time allocation committee is involved every semester, any statistical inferences about what was discovered would have to model not just the exoplanet population but also the behavior of the various TACs!

2019-05-14

normalizing flows; information theory

At lunchtime I had a great conversation with Iain Murray (Edinburgh) about two things. One was new ideas in probabilistic machine learning, and the other was this exoplanet transit spectroscopy challenge. On the former, he got me excited about normalizing flows, which use machine-learning methods (like deep learning) and a good likelihood function to build probabilistic generative models for high-dimensional data. These could be useful for astronomical applications; we discussed. On the latter, we discussed how transits work and how sunspots cause trouble for them. And how the effects might be low dimensional. And thus how a good machine-learning method should be able to deal with it or capture it.

In the afternoon I spent a short session with Rodrigo Luger (Flatiron) talking about the information about a stellar surface or about an exoplanet surface encoded in a photometric light curve. The information can come from rotation, or from transits, or both, and it is different (there is more information), oddly, if there is limb darkening! We talked about the main points such a paper should make, and some details of information theory. The problem is nice in part because if you transform the stellar surface map to spherical harmonics, a bunch of the calculations lead to beautiful trigonometric forms, and the degeneracy or eigenvector structure of the information tensor becomes very clear.

2019-05-13

eclipsing binaries

I had a good conversation with Laura Chang (Princeton) today, who is interested in doing some work in the area of binary stars. We discussed the point that many of the very challenging things people have done with the Kepler data in the study of exoplanets—exoplanet detection, completeness modeling, population inferences—are very much easier in the study of eclipsing binary stars. And the numbers are very large: The total number of eclipsing binary systems found in the Kepler data is comparable to the total number of exoplanets found. And there are also K2 and TESS binaries! So there are a lot of neat projects to think about for constraining the short-period binary population with these data. We decided to start by figuring out what's been done already.

2019-05-08

Pheno 2019, day 3

I spent the day at Pheno 2019, where I gave a plenary about Gaia and dark matter. It was a fun day, and I learned a lot. For example, I learned that when you have a dark photon, you naturally get tiny couplings between the dark matter and the photon, as if the dark matter has a tiny charge. And there are good experiments looking for milli-charged particles. I learned that deep learning methods applied to LHC events are starting to approach information-theoretic bounds for classifying jets. That's interesting, because in the absence of a likelihood function, how do you saturate bounds? I learned that the Swampland (tm) is the set of effective field theories that can't be represented in any string theory. That's interesting: If we could show that there are many EFTs that are incompatible with string theory, then string theory has strong phenomenological content!

In the last talk of the day, Mangano (CERN) talked about the future of accelerators. He made a very interesting point, which I have kind-of known for a long time, but haven't seen articulated explicitly before: If you are doing a huge project to accomplish a huge goal (like build the LHC to find the Higgs), you need to design it such that you know you will produce lots and lots of interesting science along the way. That's an important idea, and it is a great design principle for scientific research.

2019-05-07

Snail

I spent a bit of research time today writing up my ideas about what we might do with The Snail (the local phase spiral in the vertical dynamics discovered in Gaia data) to infer the gravitational potential (or force law, or density) in the Milky Way disk. The idea is to model it as an out-of-equilibrium disturbance winding up towards equilibrium. My strong intuition (that could be wrong) is that this is going to be amazingly constraining on the gravitational dynamics. I'm hoping it will be better (both in accuracy and precision) than equilibrium methods, like virial theorem and Jeans models. I sent my hand-written notes to Hans-Walter Rix (MPIA) for comments.

2019-05-06

not much

My only research events today were conversations with Eilers, Leistedt, and Pope about short-term strategies.

2019-05-03

Dr Alex Malz!

Today it was my great pleasure to participate in the PhD defense of my student Alex Malz (NYU). His dissertation is about probabilistic models for next-generation cosmology surveys (think LSST but also Euclid and so on). He showed that it is not trivial to store, vet, or use probabilistic information coming from these surveys, using photometric-redshift outputs as a proxy: The surveys expect to produce probabilistic information about redshift for the galaxies they observe. What do you need to know about these probabilistic outputs in order to use them? It turns out that the requirements are strong and hard. A few random comments:

On the vetting point: Malz showed with an adversarial attack that the ways cosmologists were comparing photometric-redshift probability outputs across different codes were very limited: His fake code that just always returned the prior pdf did as well on almost all metrics as the best codes.

On the requirements point: Malz showed that you need to know all the input assumptions and priors on any method in order to be able to use its output, especially if its output consists of posterior information. That is, you really want likelihood information, but no methods currently output that (and many couldn't even generate it because they aren't in the form of traditional inferences).

On the storage point: Malz showed that quantiles are far better than samples for storing a pdf! The results are very strong. But the hilarious thing is that the LSST database permits up to 200 floating-point numbers for storage of the pdf, when in fact the photometric redshifts will be based on only six photometric measurements! So, just like in many other surveys that I care about, the LSST Catalog will represent a data expansion, not a data reduction. Hahaha!
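Here is a toy version of the quantiles-versus-samples comparison (my numbers, not Malz's): store ten floats either way, reconstruct the CDF by interpolation, and compare to the truth. Quantiles are deterministic; samples add Monte Carlo noise:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)
pdf = stats.norm(0.6, 0.05)         # a toy "photo-z" posterior
n_store = 10                        # floats we are allowed to store

# Option 1: store n quantiles (deterministic); option 2: store n samples.
levels = (np.arange(n_store) + 0.5) / n_store
qs = pdf.ppf(levels)
samples = np.sort(pdf.rvs(size=n_store, random_state=rng))

# Reconstruct the CDF from each representation by interpolation.
zgrid = np.linspace(0.3, 0.9, 1001)
cdf_q = np.interp(zgrid, qs, levels)
cdf_s = np.interp(zgrid, samples, levels)
true = pdf.cdf(zgrid)
print("max CDF error, quantiles: %.3f" % np.max(np.abs(cdf_q - true)))
print("max CDF error, samples:   %.3f" % np.max(np.abs(cdf_s - true)))
```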

It was a great talk, and in support of a great dissertation. And a great day.

2019-05-02

Dr Mandyam

Today I had the pleasure of serving on the PhD committee for Nitya Mandyam Doddamane, who defended her thesis on the measurement of star-formation rates and stellar masses in spectroscopic surveys of galaxies. She compared different stellar populations models, based on different parts of the galaxy spectral energy distributions, and galaxy environments, to make inferences about which galaxies are and aren't forming stars. She has some nice examples that use environment to break some degeneracies in interpretation. In that sense, some of what she did was a causal inference. She also looked at aperture biases, comparing fiber spectroscopy to integral-field spectroscopy from various SDSS surveys. Her results are nice, and were beautifully presented, both in the talk and in the thesis. Congratulations Dr Mandyam!

2019-05-01

Galactic archaeology

It's a long story, but we have been experimenting continuously with the rules and principles underlying the weekly Stars and Exoplanets Meeting that we run at Flatiron for the NYC astrophysics community. One of the things I say about it is that if you want a meeting to be open, supportive, easy, and community-building, it has to have a strong set of draconian rules! In our most recent set of discussions, we have been talking about theming the meetings around specific science themes. Today was our first experiment with that! Joss Bland-Hawthorn (Sydney) is in town, so we themed the meeting around Galactic Archaeology. We had five short discussions; here are some highlights:

Megan Bedell (Flatiron) showed her incredibly precise 35-element (?) abundance measurements vs stellar age for her Solar twin sample. The abundances are very closely related to the age (for this sample that is selected to have Solar [Fe/H]). Suroor Gandhi (NYU) showed her results on the dependence of dynamical (or really kinematic) actions on [Fe/H] and age for low-alpha and high-alpha stars in the local Milky Way disk. These show that the two different sequences (high and low alpha) have different origins. And Rocio Kiman (CUNY) showed her M dwarf kinematics as a function of magnetic activity that could be used to constrain a disk heating model. All three of these presentations could benefit (for interpretation) from a forward model of star formation and radial migration in the Milky Way disk, along with heating! This is related to things I have done with Neige Frankel (MPIA) but would require extensions. Simple extensions, though.

Adam Wheeler (Columbia) showed us abundances he has measured all over the Milky Way from LAMOST spectroscopy, training a version of The Cannon with GALAH abundances. It's an amazing data set, and he asked us to brainstorm ideas about what we could do with it. He seems to have features in his catalog that look similar to the midplane issues that were causing me existential angst this past August. Bland-Hawthorn said that he sees similar things in the GALAH data too.

And Bland-Hawthorn himself talked about the possibility that some future instrument could measure stellar accelerations and get the Milky Way acceleration field directly! He started by commenting on the conclusions of the Bonaca et al work on a possible dark-matter perturber acting on the GD-1 stellar stream. His remarks played very well with things Bonaca and I have been discussing around making a non-parametric acceleration map of the Milky Way.

In summary: A great experiment!

2019-04-30

validation

After lunch, Alice Shapley (UCLA) gave a great Astro Seminar about what we can learn about high-redshift galaxies with multi-band photometry and infrared spectroscopy and, soon (or we hope soon!), JWST. There are hopes of seeing a consistent story in the star-formation rates, the build-up of mass, and the metallicity evolution in the stars and the interstellar medium.

At the end of the day, Andy Casey (Monash), Soledad Villar (NYU), and I met to discuss Villar's generation of APOGEE spectra of stars with a GAN, and how we might validate that. We discussed various options, but we are more-or-less converging on the idea that the spectra have to tell consistent or sensible stories about temperature and logg. I have ways to operationalize that. But one of the funny things is that real spectra of stars don't tell consistent stories! Because the physical models aren't awesome. So we can only require that the generated spectra do no worse than the real spectra.

2019-04-29

EPRV at Yale

I spent an absolutely great and energizing day at Yale today, with the groups of Debra Fischer (Yale) and Jessie Cisewski-Kehe (Yale), who are working together to bring the best in hardware and the best in statistics to the hard problem of making (much) better than m/s-level radial-velocity measurements. We talked about many things, but highlights included:

How do you put uncertainty estimates on extracted spectral pixels? In the 2d-to-1d extraction step, the estimation of a single 1d spectral pixel is a modification of a least-squares fit in the 2d image. How to put a good uncertainty on that, especially when the model isn't strictly linear least squares? We discussed Fisher-information estimates, which are best-case estimates, and also bootstrap or jackknife estimates, which are probably more conservative. The nice thing is that the EXPRES spectrograph (Debra Fischer's instrument) has many 2d pixels per 1d pixel, so these empirical methods like jackknife are possible.
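Here is the jackknife idea in toy form, for a single 1d spectral pixel estimated from a dozen 2d pixels (profile, counts, and noise all invented):

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy extraction: one 1d spectral pixel is a weighted fit of an amplitude
# to ~12 2d pixels in the cross-dispersion profile.
profile = np.exp(-0.5 * (np.arange(12) - 5.5) ** 2 / 2.0 ** 2)
profile /= profile.sum()
truth = 1000.0
pix = truth * profile + 5.0 * rng.normal(size=12)   # 2d pixel data

def extract(p, d):
    return np.dot(p, d) / np.dot(p, p)   # least-squares amplitude

full = extract(profile, pix)
# Leave-one-out jackknife over the 2d pixels:
loo = np.array([extract(np.delete(profile, i), np.delete(pix, i))
                for i in range(12)])
jk_var = (12 - 1) / 12.0 * np.sum((loo - loo.mean()) ** 2)
print("flux = %.1f +/- %.1f (jackknife)" % (full, np.sqrt(jk_var)))
```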

What parts of the spectrum are most sensitive to activity? One approach is to find activity labels and perform a regression from spectral pixels to activity labels. Bo Ning (Yale) is taking this approach, with strong regularization to force most pixels to zero out in the regression. He finds plausible results, with the centers of certain lines contributing strongly to the regression. We discussed the kinds of tests one can do to validate the results. Ning also has evidence that the ability to find good activity indicators might be a strong function of spectral resolution, which is good for projects like EXPRES and ESPRESSO, which have very high resolution.

How can we measure radial velocities in the presence of stellar variability? We now think that stellar variability is the tall pole in EPRV. If it is, we have some deep and fundamental questions to ask here, since the whole edifice of relative RV measurement relies on the source being constant in time! We discussed different approaches to this ill-posed problem, including using only spectral information about RV that is somehow orthogonal to the spectral variability, or placing strong priors on the RV signal to separate it from the variability signal, or performing some kind of causal-inference-like regression. There is room for good theory here. Parker Holzer (Yale) is working on some theory along the orthogonality lines.

2019-04-26

not much

Today was a low-research day! But I did have a brief conversation with Anu Raghunathan (NYU) about making her box-least-squares code more pythonic and more modular. And I did work a bit on the abstract and typographic macros in my M-type dwarf paper with Jessica Birky (UCSD).

2019-04-24

let's just see the orbits directly!

My main research today was a long call with Suroor Gandhi (NYU) about papers that determine the dark-matter density in the local part of the Milky Way disk by modeling the stars as an equilibrium population. The idea is that if the population is in equilibrium, it has some properties (like obeying the Jeans equation) that permit it to be used to do inference of the potential. I don't love these papers, both because of the assumptions they make (reaching equilibrium takes a very long time) and because of the ways that they boil down the data to some summary statistics before doing inference. Can't we generate the data and write down a proper likelihood? But more importantly, can't we do inference on non-equilibrium problems? I think we can! That's why Gandhi and I are looking at The Snail (the phase spiral). I think it reveals the orbit structure of the disk pretty-much directly, and we ought to be able to beat the equilibrium models both in precision (measuring orbits is better than measuring velocity moments) and in parsimony (we don't have to make assumptions that are as strong).

2019-04-23

how to validate a generative model?

Soledad Villar (NYU) has created a deep-ish network that can generate fake APOGEE spectra. They look convincing! Now her question to me today was: How do we validate a generative model? How do we know that these generated spectra are reasonable or sensible? In astrophysics we have no ground truth. All we can think of so far is looking at whether it is possible to do parameter estimation using standard stellar spectroscopy pipelines on these stars, and whether different parts of the spectra deliver similar stellar parameters (or as similar as do different parts of the spectra of real stars, which is not always that similar!). We can also compare to The Cannon, which is also a generative model (though not a deep one!).

2019-04-22

time-domain speckle models

I spent time on the long weekend and today working through the front parts of a new paper by Matthias Samland (MPIA), who is applying ideas we used for our pixel-level model for Kepler data to high-contrast (coronagraphic) imaging. Most high-performance data pipelines for coronagraphic imaging model the residual speckles in the data with a data-driven model. However, most of those models are spatial models: They are models for the imaging or for small imaging patches. They don't really capture the continuous time dependence of the speckles. In Samland's work, he is building temporal models, which don't capture the spatial continuity but do capture the time structure. The best possible methods I can imagine would capture some of both. Or really the right amount of both! But Samland's method is good for working at very small “inner working angle” where you don't have much training data for a spatial model because there just isn't that much space very near the null point.

2019-04-18

spectroscopy, Earth-finding

At mid-day I spun out an extended fantasy with Andy Casey (Monash) about a general and generalizable spectroscopic software toolkit that could do data analysis, spectral extraction, parameter estimation, and radial-velocity measurement in arbitrary two-dimensional spectrograph imaging. One of the related ideas is to build low-dimensional descriptions of the calibration of the spectrograph to pool calibration data and reduce pressure on calibration observations. Another idea is to avoid going to one-d spectra, except when necessary (almost never necessary). Another is never to deconvolve to high resolution (spectro-perfectionism is a deconvolve–reconvolve method, to which I object). Etc. It would be a lot of work, but it could revolutionize the business.

Late in the day I had a conversation with Megan Bedell (Flatiron) about possible high-level goals for the Terra Hunting Experiment, which is finding Earth analogs. Some of the goals might be about discovery rate (or future-discounted discovery rate) and some might be about statistics (what is the abundance of Earth analogs?). Different high-level objectives lead to different operational decisions. Interesting. And hard.

2019-04-17

binary stars and lots more

Today was a very very special Stars meeting, at least from my perspective! I won't do it justice. Carles Badenes (Pitt) led us off with a discussion of how much needs to be done to get a complete picture of binary stars and their evolution. It's a lot! And a lot of the ideas here are very causal. For example: If you find that the binary fraction varies with metallicity, what does it really vary with? Since, after all, stellar age varies with metallicity, as do all the specific abundance ratios. And also star-formation environment! It will take lots of data and theory combined to answer these questions.

Andreas Flörs (ESO) spoke about the problem of fitting models to the nebular phase of late-time supernovae, where you want to see the different elements in emission and figure out what's being produced and decaying. The problem is: There are many un-modeled ions and the fits to the data are technically bad! How to fix this? We discussed Gaussian-process fixes, both stationary and non-stationary. And also model elaboration. And the connection between these two!

Helmer Koppelman (Kapteyn) showed some amazing structure in the overlap of ESA Gaia data and various spectroscopic surveys (including LAMOST and APOGEE and others). He was showing visualizations in the z-max vs azimuthal-action plane. We discussed whether it could be selection effects. It could be; it is always dangerous to plot the data in derived (rather than more closely observational) properties.

Tyson Littenberg (NASA Marshall) told us about white-dwarf–white-dwarf (see what I did with dashes there?) binaries in ESA LISA. He has performed an information-theoretic analysis for a realistic Milky Way simulation. He showed that many binaries will be very well localized; many thousands will be clearly detected; and some will get full 6-d kinematics because the chirp mass will be visible. Of course there are simplifying assumptions about the binary environments and accelerations, but there is no doubt that it will be incredible. Late in the day we discussed how you might model all the sea of sources that aren't individually detectable. But that said, everything to many tens of kpc in the MW will be visible, so incompleteness isn't a problem until you get seriously extragalactic. Amazing!

2019-04-16

binaries

Great Astro Seminar today by Carles Badenes (Pitt), who has been studying binary stars, in the regime that you only have a few radial-velocity measurements. In this regime, you can tell that something is a binary, but you can't tell what its period or velocity amplitude is with any precision (and often almost no precision). He showed results relevant to progenitors of supernovae and other stellar explosions, and also exoplanet populations. Afterwards, Andy Casey (Monash) and I continued the discussion over drinks.

2019-04-15

topological gravity; time domain

Much excellent science today. I am creating a Monday-morning check-in and parallel working-time session for the undergraduates I work with. We spoke about box least squares for exoplanet transit finding, about FM-radio demodulators and what they have to do with timing-based approaches to planet finding, about scientific visualization and its value in communication, and about software development for science.

At lunch, the Brown-Bag talk (my favorite hour of the week) was by two CCPP PhD students. Cedric Yu (NYU) spoke about the topological form of general relativity. As my loyal reader could possibly know, I love the reformulation of GR in which you take the square-root of the metric (the tetrad, in the business). Yu showed that if you augment this with some spin fields, you can reformulate GR entirely in terms of topological invariants! That's amazing and beautiful. It connects to some cool old-school mathematics relating geometry and topology. Oliver Janssen (NYU) spoke about the wave function of the Universe, and what it might mean for the initial conditions. There is a sign ambiguity, apparently, in the argument of an exponential in the action! That's a big deal. But the ideas are interesting because they force thinking about how quantum mechanics relates to the entire Universe (and hence gravity).

In addition to all this, today was the first-ever meeting of the NYU Time Domain Astrophysics group, which brings together a set of people at NYU working in the time domain. It is super diverse, because we have people working on exoplanets, asteroseismology, stellar explosions, stellar mergers, black-hole binaries, tidal disruption events, and more. We are hoping to use our collective wisdom and power to help each other and also influence the time-domain observing projects in which many of us are involved.

2019-04-12

the Snail

As my loyal reader knows, I like to call the phase spiral in the vertical structure of the disk (what's sometimes called the Antoja spiral) by the name The Snail. Today I discussed with Suroor Gandhi (NYU) how we might use the Snail to measure the disk midplane, the local standard of rest (vertically), the mass density of the disk, and the run of this density with Galactocentric radius. We have a 9-parameter model to fit, in each angular-momentum slice. More as this develops!

2019-04-11

Sagittarius dark matter?

It's a bad week, research-wise. But I did chat with Bonaca (Harvard) this morning, and she showed that it is at least possible (not confirmed yet, but possible) that the dark substructure we infer from the GD-1 stream has kinematics consistent with it having fallen into the Milky Way along with the Sagittarius dwarf galaxy. This, if true, could lead to all sorts of new inferences and measurements.

Reminder: The idea is that there is a gap and spur in the stream, which we think was caused by a gravitational interaction with an unseen, compact mass. We took radial-velocity data which pin down the kinematics of that mass, and put joint constraints on mass, velocity, and timing. Although these constraints span a large space, it would still be very remarkable, statistically, if the constraints overlap the Sagittarius galaxy stream.

Philosophically, this connects to interesting ideas in inference: We can assume that the dark mass has nothing to do with Sag. This is conservative, and we get conservative constraints on its properties. Or we can assume that it is associated with Sag. This is not conservative, but if we make the assumption, it will improve enormously what we can measure or predict. It really gets at the conditionality or subjectivity of inference.

2019-04-10

the photon sphere; 6-d math

The day started with the Event Horizon Telescope press release conference, which I watched at Flatiron (but could have watched at NYU or Columbia; a huge fraction of the community was watching!). It really is a beautiful result, and the data analysis looks (on cursory inspection of the papers) to be excellent and conservative. It is just incredible that we can observe a photon sphere, if that really is what it is! It seemed like such a thing of legend and story.

Interesting to think about language: Is this the first observation of a black hole? Or image of one? I'd say not, because any image of a quasar is just as much an image of the radiation around a black hole as this is. I think maybe it is the first image of the parts where strong gravity is acting (photons are orbiting!). But these are not objections in any way to the importance of the result! Just musing on the language. In what sense is this the first time we have taken an image of a black hole? And is it that? And etc.

In the afternoon, Kate Storey-Fisher and I went to the board and got confused about 6-dimensional integrals. We need them to understand correlation-function estimators. The “RR” term in the correlation function estimators is a 6-d integral over an outer product of space with space!
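The way out of the confusion is Monte Carlo: the random catalog estimates that 6-d integral by throwing uniform points into the survey window and counting pairs. A minimal sketch on a unit-cube “survey”:

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(9)
# The RR term is the double (6-d) integral of the pair-separation kernel
# over the survey window with itself. Random catalogs estimate it by
# Monte Carlo: throw uniform points in the window and histogram the pairs.
N = 2000
randoms = rng.uniform(0.0, 1.0, size=(N, 3))   # unit-cube "survey"
bins = np.linspace(0.05, 0.25, 5)
counts, _ = np.histogram(pdist(randoms), bins=bins)
RR = counts / (N * (N - 1) / 2.0)              # normalized pair fraction
for lo, hi, rr in zip(bins[:-1], bins[1:], RR):
    print("r in [%.2f, %.2f): RR = %.4f" % (lo, hi, rr))
```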

2019-04-09

CMB

My only research time today was a nice astro seminar by Chris Sheehy (BNL), who convinced us that detecting primordial gravitational radiation in the B-mode polarization of the CMB at large scales will be hard but possible. It depends on some inflation physics, of course! He also showed some novel applications of what you might call “compressive sensing” to CMB foreground analyses, scooping some of my thoughts on the subject!

2019-04-08

student projects; 2-pt function estimators

Most of my research time over the weekend and today was taken up reading proposals for a funding review. That doesn't count as research, by my Rules. I don't love that part of my job. But I did get in some time with students, reading thesis chapters by Malz (NYU), planning two papers with Storey-Fisher (NYU), and discussing graduate school options with Birky (UCSD). I love these parts of my job!

In the conversation with Storey-Fisher, we set the minimal (though still very large) scope for a paper that compares and tests large-scale-structure correlation-function estimators in realistic and toy data. Our issues are: We have identified biases in the standard estimators, and we (additionally) don't love the tests or arguments that say that Landy–Szalay is optimal. So we want to test them again, and also add some new estimators, from the math literature on point processes.

2019-04-05

the information theory of light curves

In Astronomical Data Group Meeting at Flatiron today, Rodrigo Luger (Flatiron) spoke about what he calls the “null space” for reconstruction of stellar surface features (or exoplanet surface features) from light curves. If you have a rotating ball, glowing with a surface pattern of emissivity, and you only get to see an integrated light curve, you can only reconstruct certain parts of any representation of its surface. For example, all the odd-ell modes (above ell of 1) contribute exactly zero signal! And there are other degeneracies, depending on orientation. These degeneracies are exact!

What Luger showed today is that some of these degeneracies are broken just by limb darkening! And others are broken if you have transiting planets. And if you are reconstructing a planet, others are broken by the terminator of any reflected light. All of these results and considerations will feed into an information theory of stellar and exoplanet light curves.

2019-04-04

the statistics of box least squares

Last semester, I started a project with Anu Raghunathan (NYU) on the question of how much more sensitive we could be to planets in resonances than we are in more blind searches. My loyal reader knows that I'm interested in this. I think it has a simple answer, but even if it does, some playing in this statistical sandbox is fun. Today Raghunathan and I realized that we can generate a whole set of great results around box least squares, which is the dumb (but very effective, and very easy-to-analyze; I'm a fan) method that is used to generate candidate exoplanets in many transit surveys and searches. My vague idea is to use this as a place to understand multiple hypothesis testing (the physicists' “look-elsewhere effect”) and derive analytic false-positive rates for simple noise distributions, with searches of different kinds in data of different kinds.
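To show the look-elsewhere effect in its simplest form, here is a toy (mine, not Raghunathan's code): search pure white noise with boxes of fixed width, and watch the naive single-test false-positive rate get blown away by the max over trials.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(8)
    npts, width, nsim = 1000, 50, 2000
    maxz = np.empty(nsim)
    for i in range(nsim):
        y = rng.standard_normal(npts)  # noise-only "light curve"
        # depth significance of every contiguous box of this width
        zscores = np.convolve(y, np.ones(width), mode="valid") / np.sqrt(width)
        maxz[i] = np.abs(zscores).max()

    single = norm.isf(0.005)           # two-sided 1-percent point for a single box
    print("single-test threshold:", single)
    print("family-wise false-positive rate:", (maxz > single).mean())  # far above 0.01

The overlapping boxes make the trials strongly correlated, which is exactly why the analytic false-positive rate is a non-trivial (and fun) target.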

2019-04-03

six-volume, Fools, TOIs

I spent my science time today commenting on the first draft of a nice paper on phase-space volume by Matt Buckley (Rutgers). He shows that it is possible, in some cases, to measure the phase-space volume (six-volume) of structures in the ESA Gaia data. He wants to use Liouville's Theorem (that 6-volume is conserved) to measure the former bound masses of structures in the Milky Way halo that are now disrupted.

At Stars & Exoplanets Meeting at Flatiron, we discussed the Luger et al and Burns et al April-Fools papers. They both represent very impressive results, and are also a bit silly. On the Burns paper, we learned how to continue a spherical spectral representation down to zero radius without introducing a singularity. Reminded me of undergraduate quantum mechanics!

In addition, Bedell (Flatiron) spoke a bit about cool things that happened at #TessNinja2 last week in Chicago. Among other things, she showed a system that Foreman-Mackey (Flatiron) and collaborators set up to automatically fit the light curves of every announced TESS Object of Interest. It's hilarious: It produces a complete executable (and modifiable) Jupyter notebook for every TOI.

2019-04-02

gravitational redshifts; point processes

I went down to Princeton to give a seminar to the particle physicists about dark matter, and in particular what we know or could know from dynamical and kinematic measurements of stars. Before my talk, I had a great conversation with Oren Slone (Princeton) and Matt Moschella (Princeton) about gravitational redshifts. They have been thinking about where gravitational redshifts might be both measurable and physically interesting. Ideas include: Surfaces of stars, stars as a function of location in our Galaxy, and different parts of external galaxies. The magnitudes are tiny! So although the gravitational redshift is an incredibly direct tool of some gravitational dynamics, it is very hard to measure.
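To put a number on “tiny” (standard weak-field arithmetic, not anything from the Princeton conversation): the gravitational redshift at the surface of the Sun, expressed as a velocity, is

    from astropy import constants as const, units as u

    # weak-field surface redshift z ~ G M / (R c^2), quoted as the velocity c z
    dv = (const.G * const.M_sun / (const.R_sun * const.c)).to(u.m / u.s)
    print(dv)   # about 636 m / s

and potential differences across the Galaxy correspond to shifts of the same order or smaller, which is small compared to typical systematics in stellar radial velocities.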

After my talk at Princeton, I got in a short but fun conversation with Jim Peebles (Princeton) on point processes and estimators of two-point functions. Peebles, after all, wrote down the first estimators of the clustering of large-scale structure. He admitted that the history is unprincipled: They more-or-less made things up! I presented the things that I have been discussing with Kate Storey-Fisher (NYU) and Alex Barnett (Flatiron) and he was interested. And intrigued. Can we make better estimators?

2019-04-01

where is the dark mass? April Fools

The day began with a call with Ana Bonaca (Harvard), in which she showed me that she can take her models of the GD-1 stream perturbation and predict the present-day location of the substructure (or dark mass) that created the perturbation. Because the model space is broad, the “error box” is large, but the fact that we have such a prediction is fun, and interesting. All this progress flows from the fact that we now have some radial-velocity data on the stream and the spur (which is the feature we think was raised by a dark-matter interaction).

On the arXiv today were the annual set of April Fools papers. My loyal reader knows that I love papers in this category when they are silly or funny but in fact contain an interesting or important calculation or inference. There were two in this category today with Flatiron origins. One was Luger et al, inferring the mean cloud cover on Earth from systematic effects in the NASA TESS imaging! Another was Burns et al, showing that instead of “cubing the sphere” (what climate modelers do to avoid spherical coordinate singularities in discretization) you can “sphere the cube” (embed a cubical simulation volume in a natively spherical-representation simulation). This latter project was ridiculous, but it showed very dramatically that they have a representation for simulating spherical domains with no singularity anywhere (and especially not at the center of the sphere, and at no angular position on the surface).

2019-03-29

#GaiaSprint, day 5

Today was the last day and wrap-up for the 2019 SB Gaia Sprint. It was quite a week! A few highlights from the wrap-up (for me, very subjective, not fair or complete) were: Schwab Abrahams (Berkeley) showed that stars which are flagged in certain ways in the Gaia data are reliably variable stars, by looking at TESS light curves. Coronado (MPIA) showed that stars with small orbital-action differences tend to also have small element-abundance differences. Brown (Leiden) and others worked on making “Gold” samples in Gaia data that make it easy for people to look at or follow up spectroscopically. Mateu (UdelaR) improved her catalog of, and meta-data on, stellar streams in the halo. El Badry (Berkeley) convincingly showed us that there is an excess of very precisely equal-mass binary stars even at very large separations. Widrow (Queen's) showed first attempts at a regression that can be used to infer the Galactic bar density from velocity fields. Hunt (Toronto) showed velocity and density maps of a simulated disk that look very much like the features that Eilers (MPIA) and I see in the data! And Laporte (UVic) showed a great movie of the data in the phase spiral (The Snail!) that shows its beautiful and informative dependence on azimuthal action (or really vertical frequency I think!). It was a great week with great people doing great things in a great location. I'm exhausted! The wrap-up slides are available here.

2019-03-28

#GaiaSprint, day 4

Each day at the Sprint, we have a check-in, in which daily results are discussed. Today Cecilia Mateu (UdelaR) showed improvements she has made to the database or list she maintains of known or reported stellar streams in the Milky Way halo. With the encouragement of Ana Bonaca (Harvard) and the help of Adrian Price-Whelan (Princeton), she made an astropy-compatible data file that delivers coordinate transformations into the stellar stream reference frames (great-circle coordinates). This will make it much, much easier for people to perform analyses on streams and compare new detections to known objects.
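I haven't seen the actual file, but the flavor of the thing, using only stock astropy (the origin and rotation here are invented for illustration), is something like:

    import astropy.units as u
    from astropy.coordinates import SkyCoord, SkyOffsetFrame

    # a hypothetical stream-aligned frame: origin on the stream track, rotated so
    # that longitude runs along the stream (numbers made up for illustration)
    origin = SkyCoord(ra=34.6 * u.deg, dec=29.7 * u.deg)
    stream_frame = SkyOffsetFrame(origin=origin, rotation=23.0 * u.deg)

    star = SkyCoord(ra=33.0 * u.deg, dec=30.0 * u.deg)
    in_stream = star.transform_to(stream_frame)
    print(in_stream.lon, in_stream.lat)   # along- and across-stream angles

Packaging the frames this way means membership cuts, track fits, and comparisons between catalogs all become one-liners.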

At lunch, a subset of the group that discussed the ESA Gaia selection function yesterday met again to discuss the possibility of putting together a large funding proposal to create what's needed. Many interesting things came up in this discussion. One is that many more projects are enabled by the selection function. So a small investment here greatly increases the impact of Gaia. Another is that we need to have a set of clearly defined example problems that illustrate the relevant issues. Another is that many of these possible example projects need not just an observational selection function but also a 3-d dust map in the Milky Way. Is that the same project or a different one? Another is that there aren't a lot of possible funding avenues that would be appropriate in both scale and international scope. It was a valuable discussion, but I don't know where we ended up.

The highlight of the day was a long discussion of the kinematics of the Milky Way bar with Larry Widrow (Queen's) and Ortwin Gerhard (MPE) and Christina Eilers (MPIA) and Sarah Pearson (Flatiron). We almost became convinced that we are seeing the bar at the center of the Galaxy kinematically. It appears as a quadrupole in the velocity field. But if we are seeing it, we are seeing it at the wrong angle! So there is work to do. And many of the simple ideas about what we see depend on some kind of steady-state assumption, when in practice the bar evolves on a time-scale comparable to its rotation period. More soon!

2019-03-27

#GaiaSprint, day 3

At the Gaia Sprint, there is no formal program. It is just work, work, and more work! But we do let the participants self-organize some break-out sessions that are more like sessions in a (highly interactive) workshop. Today, we ran a session on a possible ESA Gaia DR2 selection function. There is no selection function, and this seriously limits the science we can do with the mission and its data. I opened the session with some generalities about what a selection function should or could be and how we would use it, drawing on notes that Rix (MPIA) and I have been developing. I learned that we are describing it all wrong, and that we need much better and more worked-out example problems. It is very interesting to classify projects into those that do and those that don't need a selection function. Rix and I put it on our to-do list to re-work our paper outline on this.

In the sprinting part of the day, Eilers (MPIA) and I stepped back and realized that we should make all nine obvious kinematic plots of the Milky Way disk: Mean velocity (three plots), mean squared velocity (three plots) and mean velocity-velocity cross-correlation components (three plots). We started on that, and the bar looks like it just pops right out in the plot of the mean-square vertical velocity component! We are starting to realize that the things we want to plot that relate to the bar are very different from the things we want to plot that relate to the spiral arms.
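The bookkeeping for the nine maps is trivial, which is part of why we should have done this from the start; a sketch (random numbers standing in for the real sample):

    import itertools
    import numpy as np
    from scipy.stats import binned_statistic_2d

    rng = np.random.default_rng(0)
    nstar = 100_000
    x, y = rng.uniform(-10, 10, (2, nstar))    # in-plane positions (kpc), fake
    v = {"vR": rng.normal(0, 35, nstar),       # stand-ins for the velocity data
         "vphi": rng.normal(220, 30, nstar),
         "vz": rng.normal(0, 20, nstar)}

    maps = {}
    for name, vi in v.items():                 # three means and three mean squares
        maps["mean_" + name], *_ = binned_statistic_2d(x, y, vi, "mean", bins=64)
        maps["meansq_" + name], *_ = binned_statistic_2d(x, y, vi ** 2, "mean", bins=64)
    for (a, va), (b, vb) in itertools.combinations(v.items(), 2):
        maps["cross_" + a + "_" + b], *_ = binned_statistic_2d(x, y, va * vb, "mean", bins=64)

That's three mean-velocity maps, three mean-square maps, and three cross-correlation maps.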

2019-03-26

#GaiaSprint, day 2

After playing with visualization yesterday, Christina Eilers (MPIA) and I got the idea that perhaps the radial-velocity variations we see in the Milky Way disk might indicate density variations. In particular, does the radial-velocity field converge on high-density regions in the disk (spiral arms, say) and diverge on low-density regions (inter-arm gaps, say)? Sarah Pearson (Flatiron) came to our rescue with a nice visualization of the density and velocity fields, in which she could smoothly go from showing one to the other. And indeed, our intuitions were justified, at least qualitatively.

In the evening check-in, Paolo Tanga (Côte d'Azur) showed some beautiful results on the ESA Gaia coordinate systems relative to other catalogs. He calls these differences "zonal corrections" for historical reasons! I asked him how he knows which of the coordinate systems is best, and he said: In the best frame, the asteroids will travel on calculable trajectories. (I would say gravitational trajectories, but for asteroids, radiation pressure and other forces are relevant too!) So the best coordinate system will be Newtonian in the Solar System! Of course given frame dragging, and strictly speaking, Newtonian for the Solar System will not be Newtonian for the Galaxy! I asked about that and it led to some discussion with Larry Widrow (Queen's). I have much to say about all this, but I'm not yet ready to say it out loud.

2019-03-25

#GaiaSprint, day 1

Today was the first day of the 2019 Santa Barbara Gaia Sprint at KITP. My goal for the week is to write a paper with Christina Eilers (MPIA), Hans-Walter Rix (MPIA), Sarah Pearson (Flatiron) and others on non-axisymmetries in the Milky Way disk, possibly including spiral arms and the bar. I'd like to say we made a lot of progress today! Maybe we did, but it was progress in the form of making very complex code changes to improve visualization and plotting and then deciding that they only made the figures less good. Grr.

The Sprint has no required program and very few plenary activities. However, each day of the Sprint ends with a check-in in which people show a few results. Kareem El-Badry (Berkeley) showed some incredible stuff he has been doing with Rix on wide-separation binaries (identified as co-moving stars). He shows an excess population of wide binaries with near identical (few percent level!) masses. This is not surprising in that such populations are known at small separations. But it is surprising given that none of the explanations for the small-separation equal-mass binaries work at large separations. He did a lot of work today showing that these results are real and not something spurious in the Gaia data.

At that same check-in, Kathryn Johnston (Columbia) showed a beautiful visualization of how the local phase spiral in the Milky Way disk varies with azimuthal action. For her, the azimuthal action is a proxy for a vertical frequency; her picture is that the disk was impulsed at some time in the past, and that impulse has been winding up at different frequencies on different orbits. Beautiful.

2019-03-22

phylogeny and nucleosynthesis

In Astronomical Data Group Meeting, Megan Bedell (Flatiron) talked about possible uses of phylogenetic methods for looking at the chemical evolution of stars in the Milky Way. That's an idea that has been tried a few times, but she has a new twist: There are methods that take explicit account of time, and there are now many stars for which we have precise ages. I'm not sure, in the end, that methods from biology will translate directly to astrophysics, but I bet the sandbox is worth digging in a little bit. This connects to my thoughts and hopes of building a data-driven model of nucleosynthesis.

Before that, in conversations (also) with Bedell, I down-selected my ideas for the NASA Exoplanets Research Program call. The stage-1 proposals are due next week, so this is about as late as I can leave it. My plan is to propose something about stellar spectral variability and the new NASA investments in extreme precision radial-velocity hardware. Watch my GitHub repos for details.

2019-03-21

avoiding active stars

Today Megan Bedell (Flatiron) and I had a telecon with the Terra Hunting Experiment team to discuss target selection. The idea is to use existing good data to choose a small set of (40-ish) stars to study for ten years. That's ambitious, which is (of course) why I love it! But how to select these stars? Our big argument today was about magnetic activity, which has some interesting properties. One is that it generally declines with age, so maybe we could just choose the stars to be not-young? Another is that there are activity cycles, so determination of low activity now might not guarantee low activity over the next decade.

One thing this caused me to ask (inside my head, that is) was: If you know that activity varies over time with some stochastic time scales, and if you need to be observing only low-activity stars, what does this imply for an adaptive observing program? That's a very nice question in experimental design. I smell the multi-armed bandit coming around the corner.

2019-03-20

noise in The Cannon, noise in EXPRES

The highlight of my research day was a long conversation with Adam J. Wheeler (Columbia) about error propagation in The Cannon. He and I discussed various ways to propagate uncertainties, which come jointly from the noisy spectra and the noisy labels that are used in the training step. There is a whole hierarchy of approximations, from mild to brutal. We had this discussion in the context of a graphical model, which Adam (completely independently of me) had drawn just like mine. I ended up proposing that he take a jackknife approach. However, it might be possible to go fully Bayesian, something I didn't think was possible a year or two ago.

We also discussed my crazy idea to build a fully non-parametric but always locally linear version of The Cannon. This would have great properties, especially as regards noise propagation and inference.

Okay, another amazing thing about the day: In Stars and Exoplanets Meeting at Flatiron, John Brewer (Yale) showed us some brand-new data from the EXPRES spectrograph for making extreme-precision radial-velocity measurements. He showed two stars that look like they are showing empirical scatter (away from a Kepler curve) of roughly 0.4 m/s. That would be ground-breaking precision and an incredibly good start for this important new instrument. Now I have to find a way to worm my way onto that team...?

2019-03-19

GD-1 spur velocities

Early in the morning I spoke with Ana Bonaca (Harvard) about the amazing velocity data she has taken for stars in the GD-1 stellar stream in the Milky Way halo. As my loyal reader knows, this stream has a spur of stars off the main branch that are consistent with being perturbed away by a massive perturber that flew by. Now she has precise velocity information about stars in the main body of the stream and in the spur. Contrary to our naive predictions, the stream and spur have very similar velocities. But the spur appears to be far lower in velocity dispersion. Is this real? And is this what we expect? We didn't predict it in our theoretical paper on the subject, but then again we didn't look! I can see some arguments that it might be true. Bonaca also sees many other things in the data, like that the GD-1 stream membership is improved dramatically when we have metallicity information.

2019-03-18

No likelihood!

Today was the first day of a Likelihood-Free Inference workshop at Flatiron, run by Foreman-Mackey (Flatiron) and others. The day started off with an absolutely beautiful introduction by Kyle Cranmer (NYU) about many methods for likelihood-free inference. He started with conceptual matters, and some beautiful examples from intro physics and also from the Large Hadron Collider (where he has been a leader in doing sophisticated inferences). And then he went on a whirlwind tour of methods and ideas.

But my two big take-aways were the following (and these two things aren't even slightly comprehensive or fair to Cranmer's deep and wide presentation): One is that he gave a great statement of the general problem of LFI, where there are, in addition to the data, parameters, nuisance parameters, and per-datum latent variables. He pointed out that even if you are a frequentist you can (in principle) integrate out the latents, because your model puts a distribution (generally) over the per-datum latents. (That's an important point, which I should emphasize in my data-analysis class.) And of course the idea of LFI is that you can't actually compute this integrated likelihood (probability of the data given parameters and nuisances, integrating out latents) in practice. You can only produce joint samples of the data and the latents. So though you are permitted to integrate out the latents, you aren't capable of integrating them out (because, like in cosmology, say, your model is a baroque and expensive simulation).

The other take-away was an incredible idea, which I hadn't learned before (maybe I should read the literature!), which is that sometimes you can set things up (using discriminators—like classifiers—oddly) such that you can compute or approximate the likelihood ratio between two models, even if you can't compute the likelihood of either one. Cranmer said two interesting things about this: One is that if you have a scalar function of the data (like a classification score from a classifier) that is monotonically related to the likelihood ratio, there are ways to calibrate it into a likelihood ratio. The other is that if you need to compute something (the likelihood ratio in this case) you don't necessarily need to compute it by computing something far far harder to compute (the two individual likelihoods in this case); he attributed this sentiment to Vapnik. You can do a lot of inference just with likelihood ratios; you rarely need true likelihoods, so this idea has legs.
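Here is the trick in its most minimal form (my toy with scikit-learn, not any of Cranmer's actual tools), in a case where the true ratio is known: train a balanced classifier between samples from two “simulators” and read the likelihood ratio off the classifier output s as s / (1 - s).

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    x0 = rng.normal(0.0, 1.0, (20000, 1))   # samples from "simulator" 0
    x1 = rng.normal(0.5, 1.0, (20000, 1))   # samples from "simulator" 1
    X = np.vstack([x0, x1])
    labels = np.repeat([0, 1], 20000)

    clf = LogisticRegression().fit(X, labels)
    xtest = np.array([[1.0]])
    s = clf.predict_proba(xtest)[0, 1]
    print("estimated ratio p1/p0:", s / (1.0 - s))
    print("true ratio:", np.exp(0.5 * xtest[0, 0] - 0.125))  # analytic for these Gaussians

Neither individual density is ever computed, and yet the ratio comes out right (up to classifier and sampling error).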

2019-03-15

#mwmw, day 2

The Milky Way Mapper meeting continued today. Both yesterday and today there were great presentations on asteroseismology in NASA TESS that might impact our target selection. The Hekker group here in Goettingen is doing a number of relevant things, including feature engineering for long-period asteroseismological inference in short time streams (which connects to things we have been thinking about for stellar rotation in TESS), and fully automated delivery of asteroseismic parameters for red giants. Short presentations on all this were given by Bell, Kuszlewicz, and Themessl (all Goettingen). I had a good discussion with all of this crew at lunch today, where they were pretty pessimistic about my ideas about getting asteroseismological parameters out of the ESA Gaia data (in some late DR).

In a coffee break, Rix (MPIA) asked me a nice homework problem about time-domain spectroscopy, inspired by things he is thinking about with Dani Maoz (TAU): If you have exactly two observations of a star, separated by time interval Δt, and these deliver (a precise) difference in radial velocity Δv, what can you conclude about the orbital parameters of that star? Assume the star is orbiting a dark companion on a circular orbit, and your measurements are so precise, the measurement uncertainty is irrelevant.
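Part of the answer falls straight out of trigonometry (my algebra, not Rix's): for a circular orbit with semi-amplitude K, period P, and unknown phase φ,

    \Delta v = K\left[\sin\!\left(\tfrac{2\pi (t+\Delta t)}{P}+\phi\right) - \sin\!\left(\tfrac{2\pi t}{P}+\phi\right)\right] = 2K\,\sin\!\left(\tfrac{\pi\,\Delta t}{P}\right)\,\cos\!\left(\tfrac{2\pi t}{P}+\phi+\tfrac{\pi\,\Delta t}{P}\right) ,

so at any fixed P the measurement implies K ≥ |Δv| / (2 |sin(π Δt / P)|): a lower limit on the semi-amplitude (and hence the companion) as a function of period, with everything else hiding in the phase.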

In a discussion led by Bird (Vandy) about signal-to-noise, Blanton (NYU) pointed out that the APOGEE detectors are up-the-ramp, so we can sub-frame them to a shorter exposure without making any approximations! That's incredible! It means that we could be doing time-domain astronomy with APOGEE on time-scales that are not accessible to any optical spectrograph. I got super-excited about this, and tried to convince Nidever (Montana) to get in there and make that change. He opined that it might not be trivial. However, the information is definitely there, latent. So my question is: What's the killer app for such technology? We can look at spectral variability information on essentially any time scale from seconds to hours. Woah.
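The up-the-ramp point in toy form (mine, not the actual APOGEE pipeline): the detector stores cumulative counts at each read, so any sub-exposure can be recovered after the fact by differencing reads.

    import numpy as np

    rng = np.random.default_rng(3)
    per_read = rng.poisson(100, size=47)   # photons arriving in each read interval
    ramp = np.cumsum(per_read)             # what an up-the-ramp detector records
    sub = np.diff(ramp, prepend=0)         # per-interval counts, recovered exactly
    assert np.array_equal(sub, per_read)   # no approximation involved

(Read noise and cosmic-ray rejection complicate real life, of course, but the information is there, latent.)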

2019-03-14

#mwmw, day 1

Today was the first day of the Milky Way Mapper Workshop, at the Max Planck Institute for Solar System Research in Goettingen. The meeting is about aspects of target selection, operations, commissioning, and planning. I am very excited about Milky Way Mapper, which is part of the SDSS-V family of projects; it will take infrared and optical spectra of millions of stars. From my perspective a few important things happened at the meeting (note the subjectivity and unfairness of this; it is not a summary):

MWM will operate in a robotic mode, with robotic fiber positioners. This permits us to observe enormous numbers of stars, but it means that our default calibration strategy of arcs and flats between exposures that we have used in SDSS through SDSS-IV will not be tenable. That's good! Because it causes us to do some commissioning work at the start where we quantitatively analyze the calibration strategy.

We discussed principles underlying target-selection in our various target categories. Hans-Walter Rix (MPIA) and I intend to write a general paper for the astrophysics community about this question, because there are some hard-won lessons from previous projects and things we and others have done wrong. I will say more about this in future blog posts as I try to write some of it up, but the extremely important underlying principle is the likelihood principle: If information comes through the likelihood function, then you have to select your targets such that, at the end of the day, you can write down a computationally tractable likelihood function for the parameters of interest. That's perhaps a Duh! point, but I'd like to point out that many of the complex, multi-stage projects (like RV surveys, or time-domain follow-up spectroscopic projects) fail to meet this requirement! More on this over the next weeks.

I learned a few crazy simple things today. One is that SDSS-IV APOGEE is taking multiple hot-star standards per plate! That means that the survey has, through its calibration work, created a huge time-domain survey of hot stars over a huge part of the sky. That's pretty important for science. And at this point, they have not been fully exploited as a scientific project. It's many thousands of stars!

Another crazy thing is that the SDSS projects have obtained enormous numbers of white-dwarf spectra, sometimes deliberately and sometimes by accident. These cover large parts of the white-dwarf sequence in ESA Gaia data, and this sequence contains lots of informative and intriguing structure. That suggests an interesting Gaia Sprint project.

2019-03-13

TESS CPM; inducing points

NASA TESS proposals are due tomorrow! I spent most of my morning writing with Tyler Pritchard (NYU), who has written almost all of our proposal to perform image differencing and produce transient-alert light curves with TESS. I worked on the descriptions of the philosophy and characteristics of the CPM model, which delivers very good performance in TESS-like situations (think K2 and Kepler). Not everything I have written is going to survive, though, because the proposal has a strict limit of four pages (and an 800-character abstract, which is very hard!).

Late in the day I had a good conversation with Lauren Anderson (NYU) about how inducing points can be used to lower the rank of the relevant linear-algebra operators. We talked through how, with control points, your matrix can only have a rank as large as the control-point set, and (even better) the control points can be placed in the space to create symmetries that speed computation. But I had an epiphany during the conversation, which is Duh in retrospect: The low-rank approximation is an approximation to the information tensor, not the covariance matrix. (These are just inverses of each other.)
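A sketch of the structure we discussed (toy kernel, toy numbers; u holds the inducing or control points): the low-rank Nyström-style object, and the Woodbury identity that reduces the solve to an m-by-m linear system.

    import numpy as np

    def kern(a, b):
        # toy squared-exponential kernel
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)

    rng = np.random.default_rng(5)
    x = np.linspace(0, 10, 500)    # data locations (n = 500)
    u = np.linspace(0, 10, 25)     # inducing / control points (m = 25)
    y = np.sin(x) + 0.3 * rng.standard_normal(x.size)
    s2 = 0.09                      # noise variance

    Kxu, Kuu = kern(x, u), kern(u, u)
    # Woodbury solve of (s2 I + Kxu Kuu^{-1} Kxu^T) alpha = y, never forming n x n:
    inner = Kuu + Kxu.T @ Kxu / s2             # m x m
    alpha = y / s2 - Kxu @ np.linalg.solve(inner, Kxu.T @ y) / s2 ** 2

In this form it is the inverse of the covariance, the information operator, that becomes cheap to apply, which is the epiphany above in code.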

2019-03-12

nothing

I got no research done at all today! The closest I came was a great lunch with my former student Morad Masjedi (Goldman Sachs), who never fails to provide me with an extremely interesting window into the world of finance (an industry to which some fraction of my group members go).

2019-03-11

TESS; correlation-function estimators are biased?

I spent research time on the weekend and today working on a NASA TESS proposal, led by Tyler Pritchard (NYU) in which we deliver image differences (using the CPM) and light curves (with assistance from ZTF public data). It is a big project but the proposal has a four-page limit, so the writing isn't trivial!

This morning, Kate Storey-Fisher (NYU) and I met with Alex Barnett (Flatiron) to discuss estimators for the correlation function. Barnett has discovered that the cosmological literature on the correlation function makes essentially no reference to the mathematics literature on point processes, and the point-process literature makes no reference to the Landy–Szalay estimator or anything like it! So there is work to do.

But of great interest today, Barnett has discovered that the Landy–Szalay estimator (and even the more trivial estimators) that we use in cosmology is non-trivially biased! It does not estimate the mean of the correlation function in an annular separation bin! It estimates a different integral of the correlation function. This has potentially disastrous consequences for things we have done related to the baryon acoustic feature.
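One way to state the problem (my paraphrase of what Barnett showed us, not his notation): in a separation bin b, the estimator converges to a pair-geometry-weighted average of ξ,

    \hat{\xi}_b \rightarrow \frac{\int_b \mathrm{RR}(r)\, \xi(r)\, \mathrm{d}r}{\int_b \mathrm{RR}(r)\, \mathrm{d}r} \ne \frac{1}{|b|} \int_b \xi(r)\, \mathrm{d}r ,

and the difference matters exactly where ξ has strong curvature across the bin, which is the situation right at the baryon acoustic feature.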

2019-03-08

Antoja spiral

At the Astronomical Data group meeting and the Dynamics group meeting I spoke a bit about the Suroor Gandhi (NYU) project to make a sandbox to look at mixing of stellar populations in phase space. As I spoke about it, I realized that we might be able to infer a lot about the Milky Way from the Antoja spiral. For one, the overall aspect ratio of the spiral tells you something about the mass density in the disk (because that aspect ratio relates a distance to a velocity, and that will make an acceleration). For another, the pitch angle as a function of radius should tell you the scale height. And so on! This mirrors things Kathryn Johnston (Columbia) has been saying at me for a while, but I am a slow learner! The nice thing is that these projects might be possible even with extremely simplistic, toy simulations; some of the arguments are very general!
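The aspect-ratio argument in crude numbers (mine, and deliberately order-of-magnitude): the spiral's extent in vertical velocity over its extent in height is a frequency, and for vertical oscillations in a slab the frequency squared is 4 pi G rho.

    import numpy as np
    from astropy import constants as const, units as u

    # made-up aspect ratio: ~50 km/s of v_z extent across ~0.7 kpc of z extent
    nu = (50 * u.km / u.s) / (0.7 * u.kpc)      # crude vertical frequency
    rho = (nu ** 2 / (4 * np.pi * const.G)).to(u.Msun / u.pc ** 3)
    print(rho)   # about 0.1 Msun per cubic pc, the right ballpark for the local disk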

Side note: These arguments are very related to the project code-named Chemical Tangents that I have been talking about for a while: They are methods for just seeing the orbit structure at first order in the data. Unlike in, say, Jeans modeling or virial estimates, where the orbit structure only appears in second-order statistics.

2019-03-07

mixing sandbox

The most fun point in my research day today was a conversation with Suroor Gandhi (NYU) about making a sandbox for testing or developing intuitions about mixing in the disk. The idea is to put stars of different kinds (think chemical abundances or tags of some kind) localized in phase space, and see how the kinds mix in orbit space to illuminate tori (or equivalents). We started down this path because the mixing isn't totally intuitive, and it isn't clear always how long it will take.

2019-03-06

augmented reality, M dwarfs, and TESS

At Stars Meeting at Flatiron, Wolfgang Kerzendorf (NYU) showed a nice demo of an idea we have been kicking around, which is to use augmented reality to visualize data in the space around the viewer. It was just a demo, but it was promising! After that, Rocio Kiman (CUNY) showed her work on M-dwarf and L-dwarf age indicators and their inter-relations. She showed that flaring dwarfs tend to be larger in radius, which might be evidence that magnetic pressure is changing their structures.

In the afternoon, I discussed NASA TESS proposal ideas with Tyler Pritchard (NYU) and Maryam Modjaz (NYU), who are interested in using TESS to do supernova and explosive-transient science. I was planning on doing something with the CPM that was developed by Dun Wang (formerly NYU) for making image differences in TESS-like time-domain data. We tentatively decided to join forces, and we will properly decide tomorrow.

2019-03-05

inclusivity

It was a low-research day today but Boris Leistedt (NYU) and I got in a discussion of how and why we run meetings and workshops and what amount of time to spend on that as a postdoc. We also talked about inclusivity at such meetings and the issues around that; especially bringing on-board collaborators who are suspicious about diversity-related desiderata. It's a complex set of issues, especially because it is where science meets ethics, and everyone feels judged or threatened.

2019-03-04

cosmological distance measures

This morning I met with Gus Beane (Penn) to discuss our work on a possible update of my pedagogical notes on cosmological distance measures. It needs to be updated because the world models in that note are so out of date! (See, for example, footnote 1 on page 2.) And the discussion about what's important doesn't map well onto what's important in today's cosmological context, where the dark energy has internal complexity. We discussed the scope or form for an update and haven't decided whether it is a total re-write, a revision, or an appendix. But whatever we decide, the first order of business is to find out what parameterizations are currently in use for the dark energy, and update all the equations, and figure out which of the analytic results survive the update (very few will, I think). Of course it's hard to predict where things are going in the future, so I don't know what we should really concentrate on.

2019-03-01

earthshine; acoustic feature

In the Astronomical Data Group Meeting at Flatiron, Rodrigo Luger (Flatiron) showed his work (in collaboration with a few others) that is heading towards using the earth-shine scattered light in the NASA TESS focal plane to reconstruct the continents and cloud cover on the rotating Earth. This project is something of a joke, but it puts to the test some ideas that are important for the future of mapping the surfaces of directly imaged exoplanets. We discussed the approximations that Luger and team are making for tractability of the inference. My position is that they should make brutal approximations and go easy on themselves mathematically!

In that same meeting, Kate Storey-Fisher (NYU) showed that she can, with her new method for estimating the correlation function, reproduce the SDSS LRG measurements of the Baryon Acoustic Feature, where it was first discovered (by me, among many others)! Her code works, and the job now is to show that we can make the measurement with far less effective model complexity and far less dependence on simulations to get uncertainty estimates. She has her killer app working and we are now ready to write a paper.

2019-02-28

#tellurics, day 4

Today was the last day and wrap-up from the Telluric Line Hack Week at Flatiron. What an impressive meeting it was; I learned a huge amount. Here are a few highlights from the wrap-up, but I warn you that these highlights are very subjective and non-representative of the whole meeting! If you want to see more, the wrap-up slides are here.

The most surprising thing to me—though maybe I shouldn't be surprised—was the optimism expressed at the wrap-up. The theoretical modelers of atmospheric absorption were optimistic that data-driven techniques could fill in the issues in their models, and the data-driven modelers were optimistic that the theory is good enough to do most of the heavy lifting. That is, there was nearly a consensus that telluric absorption can be understood to the level necessary to achieve 10-cm/s-level radial-velocity measurements.

Okay, maybe just as surprising to me were the demos that various people showed of the Planetary Spectrum Generator, which can take your location, a time, and an airmass, and make a physical prediction for the tellurics you will see, even broken down by molecular species. It is outright incredible, and remarkably accurate. It is obvious to me that our data-driven techniques would be much better applied to residuals away from this PSG model. That's an example of the kind of hybrid methods many participants at the meeting were interested in exploring.

One of the main things I learned at the meeting (and I am embarrassed to say this, since in retrospect it is so damned obvious) came from Sharon X Wang (DTM): Even if you have a perfect tellurics model, dividing it out even from your extremely high signal-to-noise spectrum is not exactly correct! The reason is duh: The spectrum is generated by a star times tellurics, convolved with the LSF. That's not the same as the LSF-convolved star times the LSF-convolved tellurics. That is a bit subtle, but seriously, Duh! Foreman-Mackey, Bedell, and I spoke a tiny bit about the point that this subtlety could be incorporated into wobble without too much trouble, and we might need to do that for infrared regions of the spectrum, where the tellurics are very strong. We have gotten away with the wobble approximation because HARPS is high resolution, and in the visible.
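The Wang point in a few lines of toy numpy (Gaussian lines and LSF invented for illustration): convolving the product is not the same as multiplying the convolutions.

    import numpy as np

    x = np.linspace(-50, 50, 2001)                        # pixel grid
    star = 1 - 0.8 * np.exp(-0.5 * (x / 1.5) ** 2)        # toy stellar line
    tell = 1 - 0.6 * np.exp(-0.5 * ((x - 3) / 1.0) ** 2)  # toy telluric line
    lsf = np.exp(-0.5 * (x / 4.0) ** 2)
    lsf /= lsf.sum()                                      # normalized line-spread function

    conv = lambda f: np.convolve(f, lsf, mode="same")
    print(np.abs(conv(star * tell) - conv(star) * conv(tell)).max())  # not zero!

The residual grows with line depth and LSF width, which is consistent with our having gotten away with it at HARPS resolution in the visible.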

And finally (but importantly for me), many of the participants tried out the wobble model or understood it or applied it to their data. We have new users and the good ideas in that code (the very simple, but good, ideas) will propagate into the community. That's very good for us; it justifies our work; and it makes me even more excited to be part of the EPRV community.

2019-02-27

#tellurics, day 3

On the third day of Telluric Line Hack Week, I had many great conversations, especially with co-organizer Cullen Blake (Penn), who has had many astronomical interests in his career and is currently building the CCD part of the near-future NEID spectrograph. In many ways my most productive conversation of the day was with Mathias Zechmeister (Göttingen) about how we combine the individual-order radial-velocity measurements with wobble into one combined RV measurement. He asked me detailed questions about our assumptions, which got me thinking about the more general question. Two comments: The first is that we officially don't believe that there is an observable absolute RV. Only relative RVs exist (okay, that's a bit strong, but it's my official position). The second is that once you realize that you will be inconsistent (slightly) from order to order, you realize that you might be inconsistent (slightly) on all sorts of different axes. Thus, the RV combination is really self-calibration of radial-velocity measurements. If we re-cast it in that form, we can do all sorts of new things, including accept data from other spectrographs, account for biases that are a function of weather, airmass, JD, or barycentric correction, and so on. Good idea!

Despite the workshop, we still held the weekly Stars Meeting at Flatiron, and I am sure glad we did! Sharon X Wang (DTM) gave a summary of what we are doing at #tellurics, Dan Tamayo (Princeton) told us about super-principled numerical integrations that are custom-built for reproducibility (which is crazy hard when you are doing problems that are strongly chaotic), and Simon J Murphy (Sydney) told us about a crazy binary star system with hot, spotty stars. The conversation in the meeting pleased me: These meetings are discussions, not seminars. The crowd loves the engineering, computing, and data-analysis aspects to the matters that arise and we are detail-oriented!

2019-02-26

deep generative models

My only real research today was a short conversation with Soledad Villar (NYU) about generative models. She did a nice experiment in which she tried to generate (with a GAN) a two-dimensional vector with a one-dimensional vector input; that is, to generate at a higher dimension than the input space. It didn't work well! That led to a longer discussion of deep generative models. I opined that GANs have their strange structure to protect the generator from having to actually have support in its generative space on the actual data. And she showed me some new objective functions that create other kinds of deep generative models that might look a lot more like likelihood optimizations or something along those lines. So we decided to try some of those out in our noisy-training, de-noising context.

2019-02-25

#tellurics, day 1

Today was the first day of the Telluric Line Hack Week at Flatiron. We got an amazing crowd to New York, to discuss (and, of course, hack on) some pretty technical matters. But of course this is really about extremely high precision radial-velocity spectroscopy, and this is a community that is detail-oriented, technical, and careful!

The first day was a get-to-know-each-other day, in which we introduced ourselves, and then talked through existing projects, data sets, and instruments. I learned a huge amount today; I'm reeling! Here are a few very subjective highlights:

In the introductions, some common themes appeared. For example, many people using physical models for tellurics want to become more data-driven, and people using data-driven techniques want to be more physics-motivated. So there is a great opportunity this week for hybrid methods that make use of the physical models, but only use data-driven approaches to model residuals away from the physical models.

Information theory came up more than once; we might do a break-out on this. In particular, we discussed the point (that I love) that what is traditionally done in fitting for RVs is an approximation to the Right Thing To Do (tm), possibly with slightly more robustness. Bedell and I really really ought to write a paper on this! But it is interesting and non-trivial to understand what techniques saturate measurement bounds, and under what assumptions. Unfortunately you can't ask these questions without making very strong assumptions.
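For the record, the kind of bound at stake is easy to write down under the strongest assumptions (photon noise only, known spectrum; toy numbers all mine): the Fisher information for a Doppler shift just sums the squared slopes of the spectrum.

    import numpy as np

    c = 2.998e8                                # speed of light (m/s)
    lam = np.linspace(5000.0, 5010.0, 2048)    # wavelength grid (Angstrom)
    # toy spectrum in expected photon counts: one absorption line
    counts = 1e4 * (1 - 0.5 * np.exp(-0.5 * ((lam - 5005.0) / 0.05) ** 2))

    dcounts = np.gradient(counts, lam)         # slope of the spectrum
    # photon-noise Fisher information for velocity, using dF/dv = (lam / c) dF/dlam
    info = np.sum((lam * dcounts) ** 2 / counts) / c ** 2
    print("Cramer-Rao bound:", 1.0 / np.sqrt(info), "m/s")

Everything interesting (tellurics, LSF changes, stellar variability) degrades or biases you relative to this bound, which is why the assumptions matter so much.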

In the discussion of hardware details, I was even more motivated than usual to say that we ought to be doing our RV fitting in the two-dimensional spectrograph data (rather than extracting to one dimensional spectra first). I was surprised to learn that many of the hardware people in the room agreed with that! So this seems like a productive direction to start looking.

Sharon Wang (DTM) is doing some interesting work trying to figure out what is really the noise floor from unmodeled telluric features in the atmosphere. That is a great question! She is asking it to bolster or criticize or set the context for going to space. Should we be doing RV in space?

Very excitingly for me, there was lots of enthusiasm in the room for learning about and trying wobble, which is Bedell's method and software for simultaneous data-driven fitting of tellurics and star. The discussion at the end of the day was all about this method, and the questions in the room were excellent, awesome, and frightening. But if all goes well we will launch quite a few projects this week.

2019-02-22

islands of stability; regression

[I was out sick for a few days]

In the weekly Dynamics meeting at Flatiron, Tomer Yavetz (Columbia) gave a very nice explanation for why stellar streams (from, say, disrupting globular clusters) in certain parts of phase space don't appear thin, which is an empirical result from simulations found by Pearson and Price-Whelan a few years ago. He shows that, near resonances in a non-integrable potential, stars that are just inside the resonant islands have average frequencies (because they orbit the resonance, in some sense) that more-or-less match the resonant frequencies, but stars just outside the separatrix-bordered island have average frequencies that are quite different. So a tiny change in phase space leads to a large change in mean frequencies and the stream doesn't appear coherent after even a very short time. That's a really nice use of theoretical ideas in dynamics to explain some observational phenomena.

I also gave the first of my computational data-analysis classes. I talked about fitting and regression and information and geometry. I had a realization (yes, whenever I teach I learn something), which is that fitting and regression look very similar, but they are in fact very different: When you are fitting, you want to know the parameters of the model. When you are regressing, you want to predict new data in the data space. So using a Gaussian Process (say) to de-trend light curves is regression, but when you add in a transit model, you are fitting.
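A sketch of the distinction in code (toy data, plain numpy): the Gaussian-process conditional mean below is regression, prediction in the data space; the moment you add a transit model and ask for its depth, you are fitting.

    import numpy as np

    def rbf(a, b, ell=1.0):
        # squared-exponential kernel
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

    rng = np.random.default_rng(1)
    t = np.linspace(0, 10, 100)
    y = np.sin(t) + 0.1 * rng.standard_normal(t.size)   # "stellar trend" plus noise

    K = rbf(t, t) + 0.01 * np.eye(t.size)               # kernel plus noise variance
    tnew = np.linspace(0, 10, 400)
    ypred = rbf(tnew, t) @ np.linalg.solve(K, y)        # regression: predicted data
    # fitting would be: maximize the likelihood over (say) a transit-depth parameter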