2019-09-30

calibrating spectrographs; volcanic exo-moons

I had a conversation with Ana Bonaca (Harvard) early today about the sky emission lines in the sky fibers of Hectochelle. We are trying to understand whether the sky is at a consistent velocity across the device. This is part of calibrating (or really self-calibrating) the spectrograph. It's confusing, though, because the sky illuminates a fiber differently from the way a star does, so this test only tests some part of the system.
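
For concreteness, a minimal version of the test might look like the following sketch, which assumes continuum-subtracted sky spectra and uses the bright [O I] 5577 Å night-sky line; the fiber-to-fiber scatter in the velocities is the quantity we care about. All the variable names are placeholders:

```python
import numpy as np

c = 299792.458  # speed of light, km/s

def line_velocity(lam, flux, lam0=5577.34, window=0.5):
    """Velocity of one sky emission line in one fiber, from a moment
    centroid within +/- window Angstroms of the rest wavelength lam0
    (a stand-in for a proper Gaussian fit); flux should be
    continuum-subtracted."""
    m = np.abs(lam - lam0) < window
    lam_c = np.sum(lam[m] * flux[m]) / np.sum(flux[m])
    return c * (lam_c - lam0) / lam0

# given a list of per-fiber (lam, flux) arrays:
# v = np.array([line_velocity(l, f) for l, f in fibers])
# print(np.std(v))   # is the sky at a consistent velocity?
```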

At the Brown-bag talk, Bob Johnson (Virginia) spoke about exo-moons and in particular exo-Ios. Yes, analogs of Jupiter's moon Io. The reason this is interesting is that Io interacts magnetically and volcanically with Jupiter, producing an extended distribution of volcanically produced ions in Jupiter's magnetic field. It is possible that transmission spectroscopy of hot Jupiters is being polluted by volcanic emissions of very hot moons! That would be so cool! Or hot?

2019-09-27

scooped!

My loyal reader knows that earlier this week I got interested in (read: annoyed with) the standard description of optimal extraction, the method for obtaining one-dimensional spectra from two-dimensional spectrograph images, and started writing about it on a trip. On my return to New York, Lily Zhao (Yale) listened patiently to my ranting and then pointed out this paper by Zechmeister et al on flat-relative extraction, which (in a much nicer way) makes all my points!

This is a classic example of getting scooped! But my feeling, on learning that I had been scooped, was happiness, not sadness: I hadn't spent all that much time on it; the time I did spend helped me understand things; and I am glad that the community has a better method. Also, it means I can concentrate on extracting, not on writing about extracting! (One problem with not reading the literature very carefully is that I need to have people around who do read the literature!)

2019-09-26

the statistics of box least squares

I had a quick pair-coding session with Anu Raghunathan (NYU) today to discuss the box least squares algorithm that is used so much in finding exoplanets. We are looking at the statistics of this algorithm, with the hope of understanding it in simple cases. It is such a simple algorithm that many of the things we want to know about uncertainty and false-positive rates can be determined in closed form, given a noise model for the data. But I'm interested in things like: How much more sensitive is a search when you know (in advance) the period of the planet? Or when you know that you have a resonant chain of planets? These questions might also have closed-form answers, but I'm not confident of that, so we are making toy data.
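
As a reminder of how simple the machinery is, here is a toy-data sketch of box least squares, simplified so that the out-of-transit level is fixed at the global weighted mean rather than fit simultaneously; all the numbers are invented. It also shows why knowing the period in advance buys sensitivity: the period loop collapses to a single trial.

```python
import numpy as np

rng = np.random.default_rng(17)

# Toy light curve: unit flux, one transiting planet, white noise.
n, sigma = 2000, 1e-3
t = np.sort(rng.uniform(0., 90., n))               # days
P_true, t0_true, dur, depth = 7.31, 1.4, 0.12, 1.5e-3
y = 1. - depth * (((t - t0_true) % P_true) < dur)
y += sigma * rng.normal(size=n)
ivar = np.full(n, sigma**-2)

def bls_power(t, y, ivar, period, dur, n_phase=100):
    """Maximum chi-squared improvement of a box at this period.

    Because the model is a box, the weighted least-squares depth at
    each trial phase is closed-form: a weighted mean of the flux
    deficit over the in-transit points, with variance 1/sum(w)."""
    r = np.average(y, weights=ivar) - y            # flux deficit
    best = 0.
    for phase in np.linspace(0., period, n_phase, endpoint=False):
        m = ((t - phase) % period) < dur
        if m.sum() < 3:
            continue
        w = ivar[m]
        depth_hat = np.sum(w * r[m]) / np.sum(w)
        best = max(best, depth_hat**2 * np.sum(w))  # (depth / uncertainty)^2
    return best

periods = np.exp(np.linspace(np.log(2.), np.log(30.), 300))
power = [bls_power(t, y, ivar, P, dur) for P in periods]
print("best period:", periods[np.argmax(power)])   # ~7.31 d
```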

2019-09-25

optimal extraction; adversarial attacks

On the plane home, I wrote words about optimal extraction, the method used in most extreme-precision radial-velocity pipelines to turn two-dimensional spectrograph images into one-dimensional spectra. My point is so simple and dumb, it barely needs to be written. But if people got it, it would simplify pipelines. The point is about the flat-field and the PSF: The way things are done now is very sensitive to these two things, which are not well known for rarely or barely illuminated pixels (think: far from the spectral traces).
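
To make the sensitivity concrete, here is a per-column sketch of the two estimators in play: classic optimal extraction (Horne 1986), and the flat-relative version (Zechmeister et al) mentioned in the 2019-09-27 entry above. Both are weighted least-squares amplitude fits; the difference is what plays the role of the profile. The function names and the per-pixel-variance inputs are mine, for illustration:

```python
import numpy as np

# One cross-dispersion column of a 2D spectrograph image:
# d   = observed counts, var = per-pixel variance,
# P   = unit-normalized cross-dispersion profile (must be known!),
# f   = counts in the same column of a flat-lamp exposure.

def horne_extract(d, var, P):
    """Classic optimal extraction (Horne 1986): the inverse-variance-
    weighted amplitude of a *normalized* profile P. Needs an accurate
    P (and flat-field) even for barely illuminated pixels."""
    w = P / var
    return np.sum(w * d) / np.sum(w * P)

def flat_relative_extract(d, var, f):
    """Flat-relative extraction: use the flat itself as the profile.
    Returns the spectrum in units of the flat-lamp spectrum, and
    pixels the flat barely illuminates get tiny weight automatically."""
    w = f / var
    return np.sum(w * d) / np.sum(w * f)
```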

Once home, I met up with a crew of data-science students at the Center for Data Science to discuss making adversarial attacks against machine-learning methods in astronomy. We talked about different kinds of machine-learning structures and how they might be sensitive to attack. And how methods might be made robust against attack, and what that would cost in training and predictive accuracy. This is a nice ball of subjects to think about! I have a funny fake-data example that I want to promote, but (to their credit) the students want to work with real data.

2019-09-24

goals: achieved

I achieved my goals for the Terra Hunting Experiment this week! After my work on the plane and the discussion we had yesterday, we (as a group) were able to draft a set of potentially sensible and valuable high-level goals for the survey. These are, roughly: maximizing the number of stars around which we have sensitivity to Earth-like planets; delivering statistically sound occurrence-rate estimates; and delivering scientifically valuable products to the community. In that order! More about this soon. But I'm very pleased.

Another theme of the last two days is that most or maybe all EPRV experiments do many things slightly wrong. Like how they do their optimal extraction. Or how they propagate their simultaneous reference to the science data. Or how they correct the tellurics. None of these is a big mistake; they are all small mistakes. But precision requirements are high! Do these small mistakes add up to anything wrong or problematic at the end of the day? Unfortunately, it is expensive to find out.

Related: I discovered today that the fundamental paper on optimal extraction contains some conceptual mistakes. Stretch goal: Write a publishable correction on the plane home!

2019-09-23

categorizing noise sources; setting goals

Today at the Terra Hunting Experiment Science Team meeting (in the beautiful offices of the Royal Astronomical Society in London) we discussed science-driven aspects of the project. There was way too much to report here, but I learned a huge amount in presentations by Annelies Mortier (Cambridge) and by Samantha Thompson (Cambridge) about the sources of astrophysical variability in stars that is (effectively) noise in the RV signals. In particular, they have developed aspects of a taxonomy of noise sources that could be used to organize our thinking about what's important to work on and what approaches to take. I got excited about working on mitigating these, which my loyal reader knows is the subject of my most recent NASA proposal.

Late in the day, I made my presentation about possible high-level goals for the survey and how we might flow decisions down from those goals. There was a very lively discussion of these. What surprised me (given the diversity of possible goals, from “find an Earth twin” to “determine the occurrence rate for rocky planets at one-year periods”) was that there was a kind of consensus: One part of the consensus was along the lines of maximizing our sensitivity where no other survey has ever been sensitive. Another part of the consensus was along the lines of being able to perform statistical analyses of our output.

2019-09-22

setting high-level goals

I flew today to London for a meeting of the Terra Hunting Experiment science team. On the plane, I worked on a presentation that looks at the high-level goals of the survey and what survey-level and operational decisions will flow down from those goals. Like most projects, this one was designed to have a certain observing capacity (a number of observing hours over a certain, long period of time). But in my view, how you allocate that time should be based on (possibly reverse-engineered) high-level goals. I worked through a few possible goals and what they might mean for us. I'm hoping we will make some progress on this point this week.

2019-09-20

Gotham fest, day 3

Today was the third day of Gotham Fest, three Fridays in September in which all of astronomy in NYC meets all of astronomy in NYC. Today's installment was at NYU, and I learned a lot! But many four-minute talks just leave me wanting much, much more.

Before that, I met up with Adrian Price-Whelan (Flatiron) and Kathryn Johnston (Columbia) to discuss projects in the Milky Way disk with Gaia and chemical abundances (from APOGEE or other sources). We discussed the reality or usefulness of the idea that the vertical dynamics in the disk is separable from the radial and azimuthal dynamics, and how this might impact our projects. We'd like to do some one-dimensional problems, because they are tractable and easy to visualize. But not if they are ill-posed or totally wrong. We came up with some tests of the separability assumption and left it to Price-Whelan to execute.
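
The cheapest toy version of such a test (a made-up potential and orbit, not necessarily the tests we settled on) is to integrate an orbit in a non-separable disk-like potential and watch the nominal vertical energy drift; if the dynamics were truly separable, E_z would be constant:

```python
import numpy as np

# Toy check of separability in a Miyamoto-Nagai disk: is the
# "vertical energy" E_z = v_z**2 / 2 + Phi(R, z) - Phi(R, 0)
# conserved along an orbit? All parameters are made up.

a, b, GM = 1.0, 0.3, 1.0

def phi(R, z):
    return -GM / np.sqrt(R**2 + (a + np.sqrt(z**2 + b**2))**2)

def accel(x, eps=1e-6):
    """Minus the numerical gradient of phi (central differences)."""
    g = np.zeros(3)
    for i in range(3):
        dx = np.zeros(3)
        dx[i] = eps
        Rp, Rm = np.hypot(*(x + dx)[:2]), np.hypot(*(x - dx)[:2])
        g[i] = -(phi(Rp, (x + dx)[2]) - phi(Rm, (x - dx)[2])) / (2 * eps)
    return g

# leapfrog integration of one disk-like orbit
x = np.array([1.0, 0.0, 0.05])
v = np.array([0.0, 0.45, 0.05])
dt, n_steps = 0.005, 20000
Ez = np.empty(n_steps)
for k in range(n_steps):
    v += 0.5 * dt * accel(x)
    x += dt * v
    v += 0.5 * dt * accel(x)
    R = np.hypot(x[0], x[1])
    Ez[k] = 0.5 * v[2]**2 + phi(R, x[2]) - phi(R, 0.0)

print("fractional E_z drift:", np.ptp(Ez) / np.abs(Ez.mean()))
```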

At lunch, I discussed machine learning with Gabi Contardo (Flatiron). She has some nice results on finding outliers in data. We discussed how to make her project such that it could find outliers that no-one else could find by any other method.

2019-09-19

nada

Because of various confluences, I spent my entire day on teaching, no research.

2019-09-17

optimal transport

Working with Suroor Gandhi (NYU) and Adrian Price-Whelan (Flatiron), we have been able (we think) to reformulate some questions about unseen gravitational matter (dark matter and unmapped stars and gas) in the Milky Way as questions about transformations that map one set of points onto another set of points. How, you might ask? By thinking about dynamical processes that set up point distributions in phase space.

Being physicists, we figured that we could do this all ourselves! And being Bayesians, we reached for probabilistic methods. Like: Build a kernel density estimate on one set of points, and maximize the likelihood of the other set of points as a function of the transformation. That's great! But it has high computational complexity and is slow to evaluate. And for our purposes we don't need the objective to be a likelihood, so we found out (through Soledad Villar, NYU) about optimal transport.

Despite its name, optimal transport is about solving problems of this type (find transformations that match point sets) with fast, good algorithms. The optimal-transport setting brings a clever objective function (that looks like earth-mover distance) and a high-performance tailored algorithm to match (that looks like linear programming). I don't understand any of this yet, but Math may have just saved our day. I hope I have said here recently how valuable it is to talk out problems with applied mathematicians!
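
Here is a tiny self-contained sketch of the discrete, equal-weights version of the idea, standing in for the real machinery: the Hungarian algorithm from scipy plays the role of the tailored OT solver, and the transformation family (a scale plus a shift) is just for illustration:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment, minimize
from scipy.spatial.distance import cdist

rng = np.random.default_rng(42)

# Two point sets related (in distribution) by a scale and a shift,
# standing in for the phase-space point sets we actually care about:
X = rng.normal(size=(300, 2))
Y = 1.3 * rng.normal(size=(300, 2)) + np.array([1.5, -0.7])

def ot_cost(params, X, Y):
    """Earth-mover-like cost: mean squared-distance assignment
    between the transformed X and Y, solved exactly (discrete
    optimal transport with equal point weights)."""
    shift, scale = params[:2], params[2]
    C = cdist(scale * X + shift, Y, metric="sqeuclidean")
    i, j = linear_sum_assignment(C)
    return C[i, j].mean()

res = minimize(ot_cost, x0=np.array([0., 0., 1.]), args=(X, Y),
               method="Nelder-Mead")
print(res.x)  # should land near shift (1.5, -0.7) and scale 1.3
```

The real OT algorithms replace the cubic-time assignment step with something much faster; that, as I understand it, is the point.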

2019-09-16

how to model the empirical abundance space

I got in some great research time late today working with Adrian Price-Whelan (Flatiron) to understand the morphology of the distribution of stars in APOGEE–Gaia in the joint space of element abundances and energy. The element abundances we are looking at are [Fe/H] and [alpha/Fe]. The energy we are looking at is the vertical energy (as in, something like the vertical action in the Milky Way disk). We are trying to execute our project called Chemical Tangents, in which we use the element abundances to find the orbit structure of the Galaxy. We have arguments that this will be more informative than doing Jeans models or other equilibrium models. But we want to demonstrate that this semester.

There are many issues! The issue we worked on today is how to model the abundance space. In principle we can construct a model that uses any statistics we like of the abundances. But we want to choose our form and parameterization with the distribution (and its dependence on energy, of course) in mind. We ended our session leaning towards some kind of mixture model, in which the dominant information will come from the mixture amplitudes. But going against all this is that we would like to be doing a project that is simple! When Price-Whelan and I get together, things tend to get a little baroque, if you know what I mean.
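
A minimal version of the kind of model we are leaning towards might look like the following sketch, with two fixed Gaussian components (all component parameters invented) and mixture amplitudes that are a logistic function of vertical energy; the orbit-structure information enters only through that amplitude function:

```python
import numpy as np
from scipy.special import logsumexp
from scipy.optimize import minimize

# Two fixed components in ([Fe/H], [alpha/Fe]) space; think low-alpha
# and high-alpha disk. All numbers are made up for illustration.
mu = np.array([[-0.1, 0.00], [-0.4, 0.25]])
cov = np.array([[[0.04, 0.], [0., 0.003]],
                [[0.04, 0.], [0., 0.003]]])

def log_gauss(X, mu, cov):
    """Log pdf of a 2D Gaussian, evaluated at the rows of X."""
    d = X - mu
    icov = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (np.einsum('ni,ij,nj->n', d, icov, d)
                   + logdet + 2. * np.log(2. * np.pi))

def negloglike(theta, X, Ez):
    a, b = theta
    f = 1. / (1. + np.exp(-(a + b * Ez)))     # P(high-alpha | E_z)
    lls = np.stack([np.log(1. - f) + log_gauss(X, mu[0], cov[0]),
                    np.log(f) + log_gauss(X, mu[1], cov[1])])
    return -logsumexp(lls, axis=0).sum()

# given abundances X (N, 2) and vertical energies Ez (N,):
# res = minimize(negloglike, x0=[0., 1.], args=(X, Ez))
```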

2019-09-13

precise spectroscopy

I spent my research time today writing notes, first on paper and then in a LaTeX document, making more specific plans for the projects we discussed yesterday with Zhao (Yale) and Bedell (Flatiron). Zhao also showed me issues with the EXPRES wavelength calibration (at the small-fraction-of-a-pixel level). I opined that it might have to do with pixel-size variations. If this is true, then it should appear in the flat-field. We discussed how we might see it in the data.
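
One way we might look, sketched with a placeholder smoothing scale and a placeholder test: a pixel that is slightly too wide collects slightly too much flat-lamp light, so pixel-size variations should appear as pixel-level structure in the flat:

```python
import numpy as np
from scipy.ndimage import median_filter

def pixel_structure(flat_row, width=15):
    """Per-pixel fractional deviation of a flat (one extracted order)
    from a smooth version of itself; pixel-size variations should
    live here."""
    smooth = median_filter(flat_row, size=width)
    return flat_row / smooth - 1.

# given flat_row and resid (wavelength-solution residuals per pixel,
# in fractions of a pixel, for the same order):
# print(np.corrcoef(pixel_structure(flat_row), resid)[0, 1])
```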

2019-09-12

looking at stars in the joint domain of time and wavelength

Today I had a great conversation with Lily Zhao (Yale) and Megan Bedell (Flatiron) about Zhao's projects for the semester at Flatiron that she is starting this month. We have projects together in spectrograph calibration, radial-velocity measurement, and the time-variability of stellar spectra. On that last part, we have various ideas about how to see the kinds of variability we expect in the joint domain of wavelength and time. And since we have a data-driven model (wobble) for stellar spectra under the assumption that there is no time variability, we can look for the things we seek in the residuals (in the data space) away from that time-independent model. We talked about what might be the lowest-hanging fruit and settled on p-mode oscillations, which induce radial-velocity variations but also brightness and temperature variations. I hope this works!
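
A first, crude version of that search might look like the sketch below: project each epoch's residual spectrum onto a template of the expected variability (here dmean_dv, a hypothetical velocity-shift template; all variable names are placeholders) and then look for periodogram power at p-mode frequencies, around 3 mHz for a Sun-like star:

```python
import numpy as np
from astropy.timeseries import LombScargle

def project_epochs(resids, ivars, template):
    """Per-epoch best-fit amplitude of `template` in the residuals,
    by weighted least squares; resids and ivars have shape
    (n_epochs, n_pixels)."""
    num = np.sum(ivars * resids * template, axis=1)
    den = np.sum(ivars * template**2, axis=1)
    return num / den

# given times (days), wobble residuals, and a template dmean_dv:
# amp = project_epochs(resids, ivars, dmean_dv)
# freq = np.linspace(1., 5., 4000) * 1e-3 * 86400.  # 1-5 mHz in 1/day
# power = LombScargle(times, amp).power(freq)
```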

2019-09-10

you can self-calibrate anything

I spoke with Christina Eilers (MPIA) early yesterday about a possible self-calibration project for stellar element-abundance measurements. The idea is: We have noisy element-abundance measurements, and we think they may be contaminated by biases as a function of stellar brightness, temperature, surface gravity, dust extinction, and so on. That is, we don't think the abundance measurements are purely measurements of the relevant abundances. So we have formulated an approach to this problem in which we regress the abundances against things we think should predict abundances (like position in the Galaxy) and also against things we think should not (like apparent magnitude). This should deliver the most precise maps of the abundance variations in the Galaxy, but also deliver improved measurements, since we will know what spurious signals are contaminating them. I wrote words in a LaTeX document about all this today, in preparation for launching a project.
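
In its simplest form the regression might look like the sketch below: linear, with one block of columns we believe should carry abundance information and one block that should not. The column choices and names are illustrative, not the real model:

```python
import numpy as np

def fit_and_correct(abund, X_physical, X_nuisance):
    """Linear least squares for one element's abundances. Returns
    the abundances with the fitted nuisance contribution subtracted,
    plus the coefficients (the nuisance block flags spurious trends)."""
    A = np.hstack([X_physical, X_nuisance])
    coef, *_ = np.linalg.lstsq(A, abund, rcond=None)
    n_phys = X_physical.shape[1]
    return abund - X_nuisance @ coef[n_phys:], coef

# e.g. X_physical: functions of position in the Galaxy; X_nuisance:
# apparent magnitude, Teff, logg, extinction columns, each
# mean-subtracted so the correction doesn't shift the zero-point.
```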

2019-09-09

enumerating all possible statistical tests

Today I got in my first weekly meeting (of the new academic year) with Kate Storey-Fisher (NYU). We went through priorities and then spoke about the problem of performing some kind of comprehensive or complete search of the large-scale structure data for anomalies. One option (popular these days) is to train a machine-learning method to recognize what's ordinary and then ask it to classify non-ordinary structures as anomalies. This is a great idea! But it has the problem that, at the end of the day, you don't know how many hypotheses you have tested. If you find a few-sigma anomaly, that isn't surprising if you have looked in many thousands of possible “places”; it is surprising only if you have looked in a few. So I am looking for comprehensive approaches in which we can pre-register an enumerated list of tests we are going to do, but have that list of tests be exceedingly long (like machine-generated). This is turning out to be a hard problem.
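
To make the hypothesis-counting point concrete, here is a toy of the pre-registered, machine-generated idea (the statistic and all the numbers are invented): fix a random seed, enumerate ten thousand counts-in-cells tests, and let the multiple-testing correction set the detection threshold:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2019)        # fixing the seed is part of
n_tests, box, cell = 10000, 1.0, 0.05    # the "pre-registration"

def run_tests(points):
    """z-scores of counts in n_tests machine-generated cells, against
    the Poisson expectation for a uniform point process."""
    lam = len(points) * cell**2 / box**2
    corners = rng.uniform(0., box - cell, size=(n_tests, 2))
    z = np.empty(n_tests)
    for k, c in enumerate(corners):
        inside = np.all((points >= c) & (points < c + cell), axis=1)
        z[k] = (inside.sum() - lam) / np.sqrt(lam)
    return z

z = run_tests(rng.uniform(0., box, size=(20000, 2)))   # null data
# Bonferroni: the 0.05 family-wise threshold for 10^4 one-sided tests
print("max z:", z.max(), "vs threshold:", norm.isf(0.05 / n_tests))
```

Even on null data the maximum z routinely exceeds 3, which is exactly the point: a few-sigma anomaly is unsurprising when you have looked in ten thousand places.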

2019-09-06

Gotham fest, day 1

The New York City physics and astronomy departments (and this includes at least Columbia, NYU, CUNY, AMNH, and Flatiron) run a set of three Friday events in which everyone (well, a large fraction of everyone) presents a brief talk about who they are and what they do. The first event was today.

2019-09-05

yes we have a sign error

I re-derived equation (11) in our paper on The Joker, in order to answer some of the questions I posed yesterday. I find that the paper does have a sign error, although I am pretty sure that the code (based on the paper) does not have a sign error. I also found that I could generalize the equation to apply to a wider range of cases, which makes me think that we should either write an updated paper or at least include the math, re-written, in our next paper (which will be on the SDSS-IV APOGEE2 DR16 data).

2019-09-04

sign error? bug or think-o?

This morning, Adrian Price-Whelan proposed that we might have a sign error in equation (11) in our paper on The Joker. I think we do, on very general grounds. But we have to sit down and re-do some math to check it. This all came up in the context that we are surprised about some of the results of the orbit fitting that The Joker does. In a nutshell: Even when a stellar radial-velocity signal is consistent with no radial-velocity trends (no companions), The Joker doesn't admit many solutions that are extremely long-period. We can't tell whether this is expected behavior, and we are just not smart enough to have expected it, or unexpected behavior because our code has a bug. Hilarious! And sad, in a way. Math is hard. And inference is hard.

2019-09-03

finding adversarial examples

One of my projects this Fall (with Soledad Villar) is to show that large classes of machine-learning methods used in astronomy are susceptible to adversarial attacks, while others are not. This relates to things like the over-fitting, generalizability, and interpretability of the different kinds of methods. Now what would constitute a good adversarial example for astronomy? One would be classification of galaxy images into elliptical and spiral, say. But I don't actually think that is a very good use of machine learning in astronomy! A better use of machine learning is converting stellar spectra into temperatures, surface gravities, and chemical abundances.

If we work in this domain, we have two challenges. The first is to re-write the concept of an adversarial attack in terms of regression (most of the literature is about classification). The second is to define large families of directions in the data space that cannot possibly be of physical importance, so that we have some kind of algorithmic definition of adversarial. The issue is: Most of these attacks in the machine-learning literature depend on a very heuristic idea of what's what: The authors look at the images and say “yikes”. But we want to find these attacks more-or-less algorithmically. I have ideas (like capitalizing on either the bandwidth of the spectrograph or else the continuum parts of the spectra), but I'd like to have more of a theory for this.
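
For the regression version, the core move is easy to sketch, and it already has the flavor of an algorithmic definition: perturb only along directions declared physically unimportant, and see how far the label moves. Everything below is a toy: the linear "model", the continuum mask, the numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix = 1000
w = 1e-2 * rng.normal(size=n_pix)     # "trained" regression weights
continuum = rng.random(n_pix) > 0.3   # pixels declared line-free

def predict(spec):
    """Toy spectrum -> Teff-like label."""
    return spec @ w

def attack(spec, eps=1e-3):
    """Gradient-sign (FGSM-style) attack restricted to continuum
    pixels: step each allowed pixel by +/- eps in the direction that
    most increases the label. For a linear model the input gradient
    is just w."""
    return spec + eps * np.sign(w) * continuum

spec = 1. + 1e-2 * rng.normal(size=n_pix)
print(predict(attack(spec)) - predict(spec))
# = eps * sum(|w|) over continuum pixels: a large label shift from a
# perturbation that touches no physically meaningful pixels.
```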