2019-02-28

#tellurics, day 4

Today was the last day and wrap-up from the Telluric Line Hack Week at Flatiron. What an impressive meeting it was; I learned a huge amount. Here are a few highlights from the wrap-up, but I warn you that these highlights are very subjective and non-representative of the whole meeting! If you want to see more, the wrap-up slides are here.

The most surprising thing to me—though maybe I shouldn't be surprised—was the optimism expressed at the wrap-up. The theoretical modelers of atmospheric absorption were optimistic that data-driven techniques could fill in the issues in their models, and the data-driven modelers were optimistic that the theory is good enough to do most of the heavy lifting. That is, there was nearly a consensus that telluric absorption can be understood to the level necessary to achieve 10-cm/s-level radial-velocity measurements.

Okay, maybe just as surprising to me were the demos that various people showed of the Planetary Spectrum Generator, which can take your location, a time, and an airmass, and make a physical prediction for the tellurics you will see, even broken down by molecular species. It is outright incredible, and remarkably accurate. It is obvious to me that our data-driven techniques would be much better applied to residuals away from this PSG model. That's an example of the kind of hybrid method many participants at the meeting were interested in exploring.

One of the main things I learned at the meeting (and I am embarrassed to say this, since in retrospect it is so damned obvious) came from Sharon X Wang (DTM): Even if you have a perfect tellurics model, dividing it out of even your extremely high signal-to-noise spectrum is not exactly correct! The reason is duh: The spectrum is generated by a star times tellurics, convolved with the LSF. That's not the same as the LSF-convolved star times the LSF-convolved tellurics. That is a bit subtle, but seriously, Duh! Foreman-Mackey, Bedell, and I spoke a tiny bit about the point that this subtlety could be incorporated into wobble without too much trouble, and we might need to do that for infrared regions of the spectrum, where the tellurics are very strong. We have gotten away with the wobble approximation because HARPS is high resolution, and in the visible.
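
Here is a minimal numpy sketch of Wang's point; the Gaussian LSF and the two absorption lines below are completely made up, but the non-commutation is generic:

```python
import numpy as np

# Toy demo that LSF * (star x tellurics) != (LSF * star) x (LSF * tellurics).
# The wavelength grid, LSF width, and line shapes are all invented.
x = np.linspace(-10.0, 10.0, 2048)
dx = x[1] - x[0]
xk = np.arange(-200, 201) * dx
lsf = np.exp(-0.5 * (xk / 0.5) ** 2)
lsf /= lsf.sum()  # normalized line-spread function

star = 1.0 - 0.8 * np.exp(-0.5 * ((x - 0.1) / 0.05) ** 2)  # one stellar line
tell = 1.0 - 0.9 * np.exp(-0.5 * ((x + 0.1) / 0.05) ** 2)  # one telluric line

right = np.convolve(star * tell, lsf, mode="same")  # multiply, then convolve
wrong = np.convolve(star, lsf, mode="same") * np.convolve(tell, lsf, mode="same")

core = slice(256, 1792)  # stay away from the edges (padding artifacts)
print(np.max(np.abs(right[core] - wrong[core])))  # nonzero: order matters
```

The difference is tiny when the lines are narrow compared to the LSF and the tellurics are weak, which is exactly why we have gotten away with the approximation at HARPS; it won't stay tiny in the infrared.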

And finally (but importantly for me), many of the participants tried out the wobble model or understood it or applied it to their data. We have new users and the good ideas in that code (the very simple, but good, ideas) will propagate into the community. That's very good for us; it justifies our work; and it makes me even more excited to be part of the EPRV community.

2019-02-27

#tellurics, day 3

On the third day of Telluric Line Hack Week, I had many great conversations, especially with co-organizer Cullen Blake (Penn), who has had many astronomical interests in his career and is currently building the CCD part of the near-future NEID spectrograph. In many ways my most productive conversation of the day was with Mathias Zechmeister (Göttingen) about how, in wobble, we combine the individual-order radial-velocity measurements into one RV measurement. He asked me detailed questions about our assumptions, which got me thinking about the more general question. Two comments: The first is that we officially don't believe that there is an observable absolute RV. Only relative RVs exist (okay, that's a bit strong, but it's my official position). The second is that once you realize that you will be inconsistent (slightly) from order to order, you realize that you might be inconsistent (slightly) on all sorts of different axes. Thus, the RV combination is really a self-calibration of radial-velocity measurements. If we re-cast it in that form, we can do all sorts of new things, including accept data from other spectrographs, account for biases that are a function of weather, airmass, JD, or barycentric correction, and so on. Good idea!
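
To make the self-calibration framing concrete, here is a toy version in numpy; the simulated data, the noise levels, and the choice to pin one order's offset to zero (because only relative RVs are observable) are all my own simplifications:

```python
import numpy as np

# Self-calibration toy: model each per-order measurement as
# (per-epoch stellar RV) + (per-order offset) and solve by least squares.
# In the re-cast form, the design matrix could grow columns for weather,
# airmass, JD, barycentric correction, even other spectrographs.
rng = np.random.default_rng(42)
n_epochs, n_orders = 30, 12
true_rv = rng.normal(0.0, 5.0, n_epochs)      # m/s, per epoch
true_off = rng.normal(0.0, 2.0, n_orders)     # m/s, per order
data = true_rv[:, None] + true_off[None, :] \
       + rng.normal(0.0, 1.0, (n_epochs, n_orders))

# One column per epoch, one per order; the zeroth order's offset is fixed
# to zero, because an overall constant is unobservable.
A = np.zeros((n_epochs * n_orders, n_epochs + n_orders - 1))
for n in range(n_epochs):
    for o in range(n_orders):
        A[n * n_orders + o, n] = 1.0
        if o > 0:
            A[n * n_orders + o, n_epochs + o - 1] = 1.0
params, *_ = np.linalg.lstsq(A, data.ravel(), rcond=None)
rv_hat = params[:n_epochs]  # combined per-epoch RVs, up to a constant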

Despite the workshop, we still held the weekly Stars Meeting at Flatiron, and I am sure glad we did! Sharon X Wang (DTM) gave a summary of what we are doing at #tellurics, Dan Tamayo (Princeton) told us about super-principled numerical integrations that are custom-built for reproducibility (which is crazy hard when you are doing problems that are strongly chaotic), and Simon J Murphy (Sydney) told us about a crazy binary star system with hot, spotty stars. The conversation in the meeting pleased me: These meetings are discussions, not seminars. The crowd loves the engineering, computing, and data-analysis aspects of the matters that arise, and we are detail-oriented!

2019-02-26

deep generative models

My only real research today was a short conversation with Soledad Villar (NYU) about generative models. She did a nice experiment in which she tried to generate (with a GAN) a two-dimensional vector with a one-dimensional vector input; that is, to generate at a higher dimension than the input space. It didn't work well! That led to a longer discussion of deep generative models. I opined that GANs have their strange structure to protect the generator from ever having to put support, in its generative space, on the actual data. And she showed me some new objective functions that create other kinds of deep generative models that might look a lot more like likelihood optimizations or something along those lines. So we decided to try some of those out in our noisy-training, de-noising context.
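
Her dimension problem is easy to see in a few lines; the particular map below is an arbitrary stand-in for a trained generator:

```python
import numpy as np

# Any deterministic generator g: R -> R^2 maps 1-d latent noise onto a
# 1-d curve, so its samples have zero area and can never fill a genuinely
# two-dimensional target. This g is arbitrary, not a trained network.
rng = np.random.default_rng(0)
z = rng.normal(size=10000)               # 1-d latent draws
fake = np.stack([z, z ** 2], axis=1)     # generator output: a parabola
real = rng.normal(size=(10000, 2))       # genuinely 2-d target samples

# The fake coordinates are deterministically related; a discriminator (or
# your eye, in a scatter plot) separates the two sets perfectly.
print(np.allclose(fake[:, 1], fake[:, 0] ** 2))  # True: zero-area support
```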

2019-02-25

#tellurics, day 1

Today was the first day of the Telluric Line Hack Week at Flatiron. We got an amazing crowd to New York, to discuss (and, of course, hack on) some pretty technical matters. But of course this is really about extremely high precision radial-velocity spectroscopy, and this is a community that is detail-oriented, technical, and careful!

The first day was a get-to-know-each-other day, in which we introduced ourselves, and then talked through existing projects, data sets, and instruments. I learned a huge amount today; I'm reeling! Here are a few very subjective highlights:

In the introductions, some common themes appeared. For example, many people using physical models for tellurics want to become more data-driven, and people using data-driven techniques want to be more physics-motivated. So there is a great opportunity this week for hybrid methods that make use of the physical models but use data-driven approaches only to model residuals away from them.

Information theory came up more than once; we might do a break-out on this. In particular, we discussed the point (that I love) that what is traditionally done in fitting for RVs is an approximation to the Right Thing To Do (tm), possibly with slightly more robustness. Bedell and I really really ought to write a paper on this! But it is interesting and non-trivial to understand what techniques saturate measurement bounds, and under what assumptions. Unfortunately you can't ask these questions without making very strong assumptions.
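
For concreteness, the bound-saturation question starts from a Fisher-information calculation; here is a sketch of the photon-limited bound on a Doppler shift, with a toy one-line spectrum and a constant signal-to-noise standing in for real data:

```python
import numpy as np

# Photon-limited Cramér-Rao bound for an RV: each pixel's response to a
# velocity shift is dF_i/dv = -(lambda_i / c) dF/dlambda, and the Fisher
# information adds those up in units of the per-pixel noise.
c = 2.998e8  # speed of light, m/s
lam = np.linspace(5000.0, 5010.0, 4000)  # wavelengths (Angstrom), toy grid
flux = 1.0 - 0.5 * np.exp(-0.5 * ((lam - 5005.0) / 0.05) ** 2)  # one line
sigma = flux / 100.0  # per-pixel noise at an assumed SNR of 100

dflux_dv = -(lam / c) * np.gradient(flux, lam)  # per-pixel velocity response
fisher = np.sum((dflux_dv / sigma) ** 2)
print("photon-limited RV bound: %.2f m/s" % (1.0 / np.sqrt(fisher)))
```

The interesting (and assumption-laden) part is asking which practical estimators get anywhere near this number.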

In the discussion of hardware details, I was even more motivated than usual to say that we ought to be doing our RV fitting in the two-dimensional spectrograph data (rather than extracting to one-dimensional spectra first). I was surprised to learn that many of the hardware people in the room agreed with that! So this seems like a productive direction to start looking.

Sharon Wang (DTM) is doing some interesting work trying to figure out what the noise floor from unmodeled telluric features in the atmosphere really is. That is a great question! She is asking it to bolster, criticize, or set the context for going to space. Should we be doing RV in space?

Very excitingly for me, there was lots of enthusiasm in the room for learning about and trying wobble, which is Bedell's method and software for simultaneous data-driven fitting of tellurics and star. The discussion at the end of the day was all about this method, and the questions in the room were excellent, awesome, and frightening. But if all goes well we will launch quite a few projects this week.

2019-02-22

islands of stability; regression

[I was out sick for a few days]

In the weekly Dynamics meeting at Flatiron, Tomer Yavetz (Columbia) gave a very nice explanation for why stellar streams (from, say, disrupting globular clusters) in certain parts of phase space don't appear thin, which is an empirical result from simulations found by Pearson and Price-Whelan a few years ago. He shows that, near resonances in a non-integrable potential, stars that are just inside the resonant islands have average frequencies (because they orbit the resonance, in some sense) that more-or-less match the resonant frequencies, but stars just outside the separatrix-bordered island have average frequencies that are quite different. So a tiny change in phase space leads to a large change in mean frequencies and the stream doesn't appear coherent after even a very short time. That's a really nice use of theoretical ideas in dynamics to explain some observational phenomena.

I also gave the first of my computational data-analysis classes. I talked about fitting and regression and information and geometry. I had a realization (yes, whenever I teach I learn something), which is that fitting and regression look very similar, but they are in fact very different: When you are fitting, you want to know the parameters of the model. When you are regressing, you want to predict new data in the data space. So using a Gaussian Process (say) to de-trend light curves is regression, but when you add in a transit model, you are fitting.
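
Here is the distinction in toy numpy form; the kernel, hyperparameters, and box-shaped transit are all arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 100)
y = np.sin(0.7 * t) + 0.1 * rng.normal(size=100)  # toy "light curve"

# REGRESSION: the Gaussian-Process conditional mean predicts new *data*
# at new times; no interesting parameters come out.
def kernel(a, b, amp=1.0, ell=2.0):
    return amp * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

K = kernel(t, t) + 0.1 ** 2 * np.eye(100)
t_new = np.linspace(0.0, 10.0, 500)
y_pred = kernel(t_new, t) @ np.linalg.solve(K, y)  # predicted data

# FITTING: infer a *parameter* -- the depth d of a fixed-shape transit --
# by least squares; the product is d, not predicted data.
m = ((t > 4.0) & (t < 5.0)).astype(float)  # toy box transit shape
d_hat = (m @ y) / (m @ m)                  # best-fit depth
```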

2019-02-18

class prep

I spent what little research time I got today planning an informal class (to be held at Flatiron, but aimed at the whole of NYC) on computational data analysis. I am pitching it to practitioners of data analysis, so it will be pragmatic but not introductory.

2019-02-15

Ballard on exoplanets

Today was a great talk by Sarah Ballard (MIT) about the future of exoplanet research, with a concentration on the search for habitable planets. She made a strong argument for looking around low-mass stars. Of course I am suspicious about whether life can form around low-mass stars, because the UV photons might be crucial!

She also strongly emphasized her results on two different populations of planets around M stars. I am not sure I like this description of her result: The fact that she gets a better fit with two simple populations than with one doesn't mean that there are two populations; there might just be one complex population not well described by a simple form! But it is a productive idea in the sense of generating and inspiring new projects.

And despite my criticisms and concerns, I loved this talk; it showed great vision for the future and she is taking great steps now towards that future. The methods and the process are good; my objections are all about subtle points of interpretation and discussion.

2019-02-14

cosmic-ray direction anisotropy

Today I learned from Noemi Globus (NYU) that the Pierre Auger Observatory has an amazing result in cosmic-ray science: The cosmic rays at very high energy do not arrive isotropically at Earth but instead show a significant dipole in their arrival directions. As a reminder: At most energies, the magnetic field of the Galaxy randomizes their directions and they are statistically isotropic at the Earth.

The Auger result isn't new, but it is new to me. And it makes sense: At the very highest energies (above 10^19 eV, apparently) the cosmic rays start to propagate more directly through the magnetic field, and preserve some of their original directional information. But the details are interesting, and Globus believes that she can explain the direction of this dipole from the local large-scale structure plus a model of the Milky Way magnetic field. That's cool! We discussed simulations of the local large-scale structure, and whether they do or can provide reasonable predictions of cosmic-ray production.

2019-02-13

stars, dark matter, TESS

Today at Flatiron, Adrian Price-Whelan (Princeton) gave a very nice talk about using stars to infer things about the dark matter in the Milky Way. He drew a very nice graphical model, connecting dark matter, cosmology, and galaxy formation, and then showed how we can infer properties of the dark matter from stellar kinematics (positions and velocities). As my loyal reader knows, this is one of my favorite philosophical questions. Anyway, he concentrated on stellar streams and showed us some nice results on Ophiuchus and GD-1 (some of which have my name on them).

The weekly Stars Meeting at Flatiron was great today. Ben Pope (NYU) showed us some terrific systematic effects in the NASA TESS data, which lead to spurious transit detections if not properly tracked. Some of these probably relate to Earthshine in the detector, about which I hope to learn more when Flatiron people return from the TESS meeting that's on in Baltimore right now.

In that same meeting, Price-Whelan showed us evidence that stars lower on the red-giant branch are more likely to have close binary companions (from APOGEE radial-velocity data). This fits in with an engulfment model (as red giants expand) but it looks like this effect must work out to pretty large orbital radius. Which maybe isn't crazy? Not sure.

And Jason Curtis (Columbia) showed amazing stellar-rotation results from TESS data on a new open cluster that we (Semyeong Oh et al) found in the ESA Gaia data. He can show from a beautifully informative relationship between rotation period and color (like really tight) that the cluster is extremely similar to the Pleiades in age. Really beautiful results. It is clear that gyrochronology works well for young ages (I guess that's a no-brainer) and it is also clear that it is way more precise for groups of stars than individuals. We discussed the possibility that this is evidence for the theoretical idea that star clusters should form in clusters.

2019-02-12

candidate Williamson

Today Marc Williamson (NYU) passed (beautifully, I might say) his PhD Candidacy exam. He is working on the progenitors of core-collapse supernovae, making inferences from post-peak-brightness spectroscopy. He has a number of absolutely excellent results. One is (duh!) that the supernova types seem to form a continuum, which makes perfect sense, given that we think they come from a continuous process of envelope loss. Another is that the best time to type a supernova with spectroscopy is 10-15 days after maximum light. That's new! His work is based on the kind of machine-learning I love: Linear models and linear support vector machines. I love them because they are convex, (relatively) interpretable, and easy to visualize and check.
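
For flavor, here is roughly what that kind of classifier looks like; the arrays below are random stand-ins, not Williamson's data or pipeline:

```python
import numpy as np
from sklearn.svm import LinearSVC

# Hypothetical sketch: spectra as feature vectors, types as labels, a
# linear SVM as the (convex!) classifier.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 1000))   # 200 fake "spectra" x 1000 pixels
y = rng.integers(0, 2, size=200)   # fake binary type labels

clf = LinearSVC(C=1.0, max_iter=10000).fit(X, y)

# The payoff of linearity: the learned weights live in spectrum space, so
# you can plot them against wavelength and see which spectral features
# drive the classification.
weights = clf.coef_[0]  # shape (1000,)
```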

One amusing idea that came up is that if the stripped supernova types were not in a continuum, but really distinct types, then it might get really hard to explain. Like really hard. So I proposed that it could be a technosignature! That's a NASA neologism, but you can guess what it means. I discussed this more late in the day with Soledad Villar (NYU) and Adrian Price-Whelan (Princeton), and together we came up with ideas about wisdom signatures and foolishness signatures. See twitter for more.

Also with Villar I worked out a very simple toy problem to think about GANs: Have the data be two-d vectors drawn from a trivial distribution (like a 2-d Gaussian) and have the generator take a one-d Gaussian draw and transform it into fake data. We were able to make a strong prediction about how the transform from the one-d to the two-d should look in the generator.

2019-02-11

truly theoretical work on growth of structure

The highlight of my day was a great NYU CCPP Brown-Bag talk by Mikhail Ivanov (NYU) about the one-point pdf of dark-matter density in the Universe, using a modified spherical-collapse model, based on things in this paper. It turns out that you can do a very good job of predicting counts-in-cells or equivalent one-point functions for the dark-matter density by considering the relationship between the linear theory and a non-linearity related to the calculable non-linearity you can work out in spherical collapse. More specifically, his approach is to expand the perturbations in the neighborhood of a point into a monopole term and a sum of radial functions times spherical harmonics. The monopole term acts like spherical collapse and the higher harmonics lead to a multiplicative correction. The whole framework depends on some mathematical properties of gravitational collapse that Ivanov can't prove but seem to be true in simulations. The theory is non-perturbative in the sense that it goes well into non-linear scales, and does well. That's some impressive theory, and it was a beautiful talk.

2019-02-08

How does the disk work?

Jason Hunt (Toronto) was in town today to discuss things dynamical. We discussed various things. I described to him the MySpace project in which Price-Whelan (Princeton) and I are trying to make a data-driven classification of kinematic structures in the thin disk. He described a project in which he is trying to build consistent dynamical models of these structures. He finds that there is no trivial explanation of all the visible structure; probably multiple things are at work. But his models do look very similar to the data qualitatively, so it sure is promising.

2019-02-07

stellar activity cycles

Ben Montet (Chicago) was visiting Flatiron today and gave a very nice talk. He gave a full review of exoplanet discovery science but focused on a few specific things. One of them was stellar activity: He has been using the NASA Kepler full-frame images (which were taken once per month, roughly) to look at precise stellar photometric variations over long timescales (because standard transit techniques filter out the long timescales, and they are hard to recover without full-frame images). He can see stellar activity cycles in many stars, and look at their relationships with things like stellar rotation periods (and hence ages) and so on. He does find relationships! The nice thing is that the NASA TESS Mission produces full-frame images every 30 minutes, so it has way more data relevant to these questions, although it doesn't observe (most of) the sky for very long. All these things are highly relevant to the things I have been thinking about for Terra Hunting Experiment and related projects, a point he made clearly in his talk.

2019-02-06

measuring the Galaxy with HARPS? de-noising

Megan Bedell (Flatiron) was at Yale yesterday; they pointed out that some of the time-variable telluric lines we see in our wobble model of the HARPS data are not telluric at all; they are in fact interstellar medium lines. That got her thinking: Could we measure our velocity with respect to the local ISM using HARPS? The answer is obviously yes, and this could have strong implications for the Milky Way rotation curve! The signal should be a dipolar pattern of RV shifts in interstellar lines as you look around the Sun in celestial coordinates. In the barycentric reference frame, of course.
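
The measurement itself would be a trivial linear fit; here is a simulated sketch (sightlines, solar velocity, and noise all invented) of how the dipole delivers the solar motion relative to the local ISM:

```python
import numpy as np

# If the local ISM is at rest in some frame, the barycentric RV of an
# interstellar line toward unit vector n is just -v_sun . n, so the
# dipole fit is linear least squares with design matrix -n.
rng = np.random.default_rng(7)
n_hat = rng.normal(size=(500, 3))
n_hat /= np.linalg.norm(n_hat, axis=1, keepdims=True)  # random sightlines
v_sun_true = np.array([10.0, 15.0, 7.0])               # km/s, made up
rv = -n_hat @ v_sun_true + rng.normal(0.0, 0.5, 500)   # dipole plus noise

v_sun_fit, *_ = np.linalg.lstsq(-n_hat, rv, rcond=None)
print(v_sun_fit)  # recovers roughly (10, 15, 7)
```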

I also got great news first thing this morning: The idea that Soledad Villar (NYU) and I discussed yesterday about using a generative adversarial network trained on noisy data to de-noise noisy data was a success: It works! Of course, being a mathematician, her reaction was “I think I can prove something!” Mine was: Let's start using it! Probably the mathematical reaction is the better one. If we move on this it will be my first ever real foray into deep learning.

2019-02-05

GANs for denoising; neutrino masses

Tuesdays are light on research! However, I did get a chance today to pitch an idea to Soledad Villar (NYU) about generative adversarial networks (GANs). In Houston a couple of weeks ago she showed results that use a GAN as a regularizer or prior to de-noise noisy data. But she had trained the GAN on noise-free data. I think you could even train this GAN on noisy data, provided that you follow the generator with a noisification step before it hands the output to the discriminator. In principle, the GAN should learn the noise-free model from the noisy data in this case. But I don't understand whether there is enough information. We discussed and planned a few extremely simple experiments to test this.
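
Here is roughly what I have in mind, as a minimal PyTorch sketch; the network sizes, the Gaussian noise model, and every hyperparameter are placeholders, and the "data" are just draws from a 2-d Gaussian:

```python
import torch
from torch import nn

# Noisy-training GAN sketch: the generator makes clean samples, a
# noisification step corrupts them with the (assumed known) noise model,
# and the discriminator only ever compares noisy against noisy.
G = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
sigma = 0.3  # assumed-known noise level

for step in range(2000):
    real_noisy = torch.randn(64, 2) + sigma * torch.randn(64, 2)
    fake_clean = G(torch.randn(64, 1))
    fake_noisy = fake_clean + sigma * torch.randn(64, 2)  # noisification

    # Discriminator: tell noisy real from noisy fake.
    opt_d.zero_grad()
    loss_d = bce(D(real_noisy), torch.ones(64, 1)) \
           + bce(D(fake_noisy.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator: fool the discriminator *after* noisification, so G is
    # pushed toward the noise-free distribution.
    opt_g.zero_grad()
    loss_g = bce(D(fake_noisy), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()
```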

In the NYU Astro Seminar, Jia Liu (Princeton) spoke about neutrino masses. Right now the best limits on the total neutrino mass are from large-scale structure (although many particle physicists are skeptical because these limits involve the very baroque cosmological model, and not just accelerator and detector physics). Liu has come up with some very clever observables in cosmology (large-scale structure) that could do an even better job of constraining the total neutrino mass. I asked what is now one of my standard questions: If you have a large suite of simulations with different assumptions about neutrinos (she does), and a machinery for writing down permitted observables (no-one has this yet!), you could have a robot decide what are the best observables. That is, you could use brute force instead of cleverness, and you might do much, much better. This is still on my to-do list!

2019-02-04

investigating residuals

The day started with a conversation with Bedell (Flatiron) about projects that arose during or after the Terra Hunting Experiment collaboration meeting last week. We decided to prioritize projects we can do right now, with residuals away from our wobble code fits to HARPS data. The nice thing is that because wobble produces a very accurate generative model of the data, the residuals contain lots of subtle science. For example, covariances between residuals in flux space and local spectral slope expectations (from the model) will point to individual pixel or stitching-block offsets on the focal plane. Or for another, regressions of flux-space residuals against radial-velocity residuals will reveal spectroscopic indicators of spots and plages. There are lots of things to do that are individually publishable but which will also support the THE project.
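
The second of those regressions is one line of linear algebra; here is a sketch with simulated arrays standing in for the wobble outputs, and a single planted "spot pixel":

```python
import numpy as np

# R is (epochs x pixels) flux residuals; dv is the per-epoch RV residual.
# The per-pixel regression slope flags pixels whose flux tracks the RV
# signal -- candidate spot/plage indicators (or calibration problems).
rng = np.random.default_rng(11)
n_epochs, n_pix = 200, 4096
dv = rng.normal(0.0, 1.0, n_epochs)            # RV residuals (m/s)
R = rng.normal(0.0, 0.01, (n_epochs, n_pix))   # flux residuals
R[:, 1000] += 0.005 * dv                       # one planted "spot pixel"

slope = (dv @ R) / (dv @ dv)                   # per-pixel regression slope
print(np.argmax(np.abs(slope)))                # finds pixel 1000
```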

2019-02-02

Simpson's Paradox, Li, self-calibration

On the plane home from the UK, I worked on three things. The first was a very nice paper by Ivan Minchev (AIP) and Gal Matijevic (AIP) about Simpson's Paradox in Milky Way stellar statistics, like chemodynamics. Simpson's paradox is the point that a trend can have a different sign in a subset of a population than it does in the whole population. The classic example is of two baseball players over two seasons: In the first season, player A has a higher batting average than player B. And in the second season, player A again has a higher batting average than player B. And yet, overall, player B has a higher average! How is that possible? It works if player A bats far more in one season, and player B bats far more in the other, and they both bat higher in that other season. Anyway, the situation is generic in statistics about stars!
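
To make it concrete, here is an invented set of numbers that realizes the paradox:

```python
# Invented hits and at-bats: A out-hits B in each season, B wins overall,
# because each player's at-bats pile up in a different season.
a = {"season 1": (4, 10), "season 2": (25, 100)}   # A: .400, then .250
b = {"season 1": (35, 100), "season 2": (2, 10)}   # B: .350, then .200

def overall(player):
    hits = sum(h for h, n in player.values())
    at_bats = sum(n for h, n in player.values())
    return hits / at_bats

print(overall(a), overall(b))  # 0.264 vs 0.336: B wins overall
```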

The second thing I worked on was a new paper by Andy Casey (Monash) and company about how red-giant stars get Lithium abundance anomalies. He shows that Li anomalies happen all over the RGB, and even on stars descending the branch (as per asteroseismology) and thus he can show that the Li anomalies are not caused by any particular stellar evolutionary phase. That argues for either planet engulfment or binary-induced convection changes. The former is also disfavored because of the stars descending the RGB. The real triumph is the huge sample of Li-enhanced stars he has found, working with The Cannon and Anna Y. Q. Ho (Caltech). It's a really beautiful use of The Cannon as a spectral synthesis tool.

The third thing I worked on was a plan to self-calibrate HARPS (and equivalent spectrograph) pixel offsets (that is, calibration errors at the pixel level) using the science data from the instrument. That is, you don't need arcs or Fabry–Perot to find these offsets; since they matter to data interpretation, they can be seen in the data directly! I have a plan, and I think it is easy to implement.

2019-02-01

THE Meeting, day 2

Today at the Terra Hunting Experiment meeting, we got deeply into software and calibration issues. The HARPS family of instruments are designed to be extremely stable in all respects, but also monitored by a Fabry–Perot signal imprinted on the detector during the science exposures. The calibration scheme obtains absolute calibration (accuracy) from arc exposures and relative calibration (precision) from F–P data. There was discussion of various replacements for the arcs, including Uranium–Neon lamps, for which the atlas of lines is not yet good enough, and laser-frequency combs, which are not yet reliable.

Another big point of discussion today was the target selection. We (the Experiment) plan to observe a small number (40-ish) of stars for a long time (1000-ish individual exposures for each star). The question of how to choose these targets was interesting and contentious. We want the targets to be good for finding planets! But this connects to brightness, right ascension, stellar activity, asteroseismic mode amplitudes, and many other things, none of which we know in advance. How much work do we need to do in early observing to cut down a parent sample to a solid, small sample? And how much can we figure out from public data already available? By the end of the day there was some consensus that we would probably spend the first month or so of the project doing sample-selection observations.

At the end of the day we discussed data-analysis techniques and tellurics and stellar activity. There are a lot of scientific projects we could be doing that would help with extreme-precision radial-velocity measurements. For instance, Suzanne Aigrain (Oxford) showed a toy model of stellar activity which, if correct at zeroth order, would leave an imprint on a regression of stellar spectrum against measured radial velocity. That's worth looking for. The signal will be very weak, but in a typical spectrum we have tens of thousands of pixels, each of which has signal-to-noise of more than 100. And if a linear regression works, it will deliver a linear subspace-projector that just straight-up improves radial-velocity measurements!
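
As a sketch of what that subspace-projector would look like, here with simulated spectra and a planted activity direction (we have not run anything like this on real data):

```python
import numpy as np

# Regress mean-subtracted spectra X (epochs x pixels) on measured RVs,
# take the fitted direction w in spectrum space as the activity axis,
# and project it out of the spectra.
rng = np.random.default_rng(13)
n_epochs, n_pix = 300, 2000
rv = rng.normal(0.0, 2.0, n_epochs)             # measured RVs (m/s)
w_true = rng.normal(0.0, 1.0, n_pix) / np.sqrt(n_pix)
X = np.outer(rv, w_true) + rng.normal(0.0, 0.01, (n_epochs, n_pix))

w = (rv @ X) / (rv @ rv)                        # per-pixel regression
X_clean = X - np.outer(X @ w, w) / (w @ w)      # activity direction removed
```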