2019-02-14

cosmic-ray direction anisotropy

Today I learned from Noemi Globus (NYU) that the Pierre Auger Observatory has an amazing result in cosmic-ray science: The cosmic rays at very high energy do not arrive isotropically at Earth but instead show a significant dipole in their arrival directions. As a reminder: At most energies, the magnetic field of the Galaxy randomizes their directions and they are statistically isotropic at the Earth.

The Auger result isn't new, but it is new to me. And it makes sense: At the very highest energies (above 10^19 eV, apparently) the cosmic rays start to propagate more directly through the magnetic field, and preserve some of their original directional information. But the details are interesting, and Globus believes that she can explain the direction of this dipole from the local large-scale structure plus a model of the Milky Way magnetic field. That's cool! We discussed simulations of the local large-scale structure, and whether they do or can provide reasonable predictions of cosmic-ray production.

2019-02-13

stars, dark matter, TESS

Today at Flatiron, Adrian Price-Whelan (Princeton) gave a very nice talk about using stars to infer things about the dark matter in the Milky Way. He drew a very nice graphical model, connecting dark matter, cosmology, and galaxy formation, and then showed how we can infer properties of the dark matter from stellar kinematics (positions and velocities). As my loyal reader knows, this is one of my favorite philosophical questions. Anyway, he concentrated on stellar streams and showed us some nice results on Ophiuchus and GD-1 (some of which have my name on them).

The weekly Stars Meeting at Flatiron was great today. Ben Pope (NYU) showed us some terrific systematic effects in the NASA TESS data, which lead to spurious transit detections if not properly tracked. Some of these probably relate to Earthshine in the detector, about which I hope to learn more when Flatiron people return from the TESS meeting that's on in Baltimore right now.

In that same meeting, Price-Whelan showed us evidence that stars lower on the red-giant branch are more likely to have close binary companions (from APOGEE radial-velocity data). This fits in with an engulfment model (as red giants expand) but it looks like this effect must work out to pretty large orbital radius. Which maybe isn't crazy? Not sure.

And Jason Curtis (Columbia) showed amazing stellar-rotation results from TESS data on a new open cluster that we (Semyeong Oh et al) found in the ESA Gaia data. He can show from a beautifully informative relationship between rotation period and color (like really tight) that the cluster is extremely similar to the Pleiades in age. Really beautiful results. It is clear that gyrochronology works well for young ages (I guess that's a no-brainer) and it is also clear that it is way more precise for groups of stars than individuals. We discussed the possibility that this is evidence for the theoretical idea that star clusters should form in clusters.

2019-02-12

candidate Williamson

Today Marc Williamson (NYU) passed (beautifully, I might say) his PhD Candidacy exam. He is working on the progenitors of core-collapse supernovae, making inferences from post-peak-brightness spectroscopy. He has a number of absolutely excellent results. One is (duh!) that the supernova types seem to form a continuum, which makes perfect sense, given that we think they come from a continuous process of envelope loss. Another is that the best time to type a supernova with spectroscopy is 10-15 days after maximum light. That's new! His work is based on the kind of machine-learning I love: Linear models and linear support vector machines. I love them because they are convex, (relatively) interpretable, and easy to visualize and check.
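For concreteness, a linear SVM of the sort described is just a hinge loss plus an L2 penalty, which you can fit with plain subgradient descent. Here is a minimal numpy sketch on made-up two-class data standing in for spectral features (nothing here is Williamson's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up two-class "spectral feature" data: two Gaussian blobs.
X = np.vstack([rng.normal(+1.0, 1.0, (100, 5)),
               rng.normal(-1.0, 1.0, (100, 5))])
y = np.array([+1.0] * 100 + [-1.0] * 100)

# Linear SVM: subgradient descent on hinge loss + L2 penalty.
w, b = np.zeros(5), 0.0
lam, lr = 1e-2, 1e-2
for _ in range(2000):
    margins = y * (X @ w + b)
    viol = margins < 1.0                     # margin violators
    grad_w = lam * w - (y[viol][:, None] * X[viol]).sum(axis=0) / len(X)
    grad_b = -y[viol].sum() / len(X)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = (np.sign(X @ w + b) == y).mean()
```

The same structure scales to thousands of spectral bins, and because the objective is convex, any sensible optimizer lands in (essentially) the same place, which is part of why these models are so easy to check.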

One amusing idea that came up is that if the stripped supernova types were not in a continuum, but really distinct types, then it might get really hard to explain. Like really hard. So I proposed that it could be a technosignature! That's a NASA neologism, but you can guess what it means. I discussed this more late in the day with Soledad Villar (NYU) and Adrian Price-Whelan (NYU); together we came up with ideas about wisdom signatures and foolishness signatures. See twitter for more.

Also with Villar I worked out a very simple toy problem to think about GANs: Have the data be two-d vectors drawn from a trivial distribution (like a 2-d Gaussian) and have the generator take a one-d Gaussian draw and transform it into fake data. We were able to make a strong prediction about how the transform from the one-d to the two-d should look in the generator.
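One way to see why that transform is constrained (my own illustration of the rank obstruction, not necessarily the exact prediction we worked out): a purely linear generator acting on a 1-d draw can only produce a degenerate, rank-one 2-d Gaussian, so the learned map must be nonlinear to fill out both dimensions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target data: a full-rank 2-d Gaussian.
target = rng.multivariate_normal([0.0, 0.0],
                                 [[1.0, 0.3], [0.3, 1.0]], 10000)

# A purely linear generator acting on a 1-d Gaussian draw:
z = rng.normal(size=10000)
A = np.array([1.0, 0.5])            # hypothetical learned weights
fake = z[:, None] * A[None, :]      # shape (10000, 2)

# The fake data's covariance is rank one (smallest eigenvalue ~0),
# while the target covariance is full rank, so no linear generator
# from 1-d can ever match it.
ev_fake = np.linalg.eigvalsh(np.cov(fake.T))
ev_true = np.linalg.eigvalsh(np.cov(target.T))
```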

2019-02-11

truly theoretical work on growth of structure

The highlight of my day was a great NYU CCPP Brown-Bag talk by Mikhail Ivanov (NYU) about the one-point pdf of dark-matter density in the Universe, using a modified spherical-collapse model, based on things in this paper. It turns out that you can do a very good job of predicting counts-in-cells or equivalent one-point functions for the dark-matter density by considering the relationship between the linear theory and a non-linearity related to the calculable non-linearity you can work out in spherical collapse. More specifically, his approach is to expand the perturbations in the neighborhood of a point into a monopole term and a sum of radial functions times spherical harmonics. The monopole term acts like spherical collapse and the higher harmonics lead to a multiplicative correction. The whole framework depends on some mathematical properties of gravitational collapse that Ivanov can't prove but seem to be true in simulations. The theory is non-perturbative in the sense that it goes well into non-linear scales, and does well. That's some impressive theory, and it was a beautiful talk.
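In rough outline (my sketch of the standard spherical-collapse logic, in my own notation; Ivanov's actual construction adds the higher-harmonic corrections on top of this):

```latex
% The linear density contrast \delta_L is Gaussian with variance \sigma^2.
% Spherical collapse supplies a deterministic map F to the non-linear
% contrast; in an Einstein--de Sitter background its expansion is
%   \delta = F(\delta_L) = \delta_L + \tfrac{17}{21}\,\delta_L^2 + \dots
% A saddle-point evaluation then gives, at leading order,
%   P(\delta) \propto \exp\!\left[-\frac{F^{-1}(\delta)^2}{2\sigma^2}\right],
% with the aspherical (higher-harmonic) terms entering as a
% multiplicative correction to the prefactor.
```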

2019-02-08

How does the disk work?

Jason Hunt (Toronto) was in town today to discuss things dynamical. We discussed various things. I described to him the MySpace project in which Price-Whelan (Princeton) and I are trying to make a data-driven classification of kinematic structures in the thin disk. He described a project in which he is trying to build consistent dynamical models of these structures. He finds that there is no trivial explanation of all the visible structure; probably multiple things are at work. But his models do look very similar to the data qualitatively, so it sure is promising.

2019-02-07

stellar activity cycles

Ben Montet (Chicago) was visiting Flatiron today and gave a very nice talk. He gave a full review of exoplanet discovery science but focused on a few specific things. One of them was stellar activity: He has been using the NASA Kepler full-frame images (which were taken once per month, roughly) to look at precise stellar photometric variations over long timescales (because standard transit techniques filter out the long time scales, and they are hard to recover without full-frame images). He can see stellar activity cycles in many stars, and look at their relationships with things like stellar rotation periods (and hence ages) and so on. He does find relationships! The nice thing is that the NASA TESS Mission produces full-frame images every 30 minutes, so it has way more data relevant to these questions, although it doesn't observe (most of) the sky for very long. All these things are highly relevant to the things I have been thinking about for Terra Hunting Experiment and related projects, a point he made clearly in his talk.

2019-02-06

measuring the Galaxy with HARPS? de-noising

Megan Bedell (Flatiron) was at Yale yesterday; they pointed out that some of the time-variable telluric lines we see in our wobble model of the HARPS data are not telluric at all; they are in fact interstellar medium lines. That got her thinking: Could we measure our velocity with respect to the local ISM using HARPS? The answer is obviously yes, and this could have strong implications for the Milky Way rotation curve! The signal should be a dipolar pattern of RV shifts in interstellar lines as you look around the Sun in celestial coordinates. In the barycentric reference frame, of course.
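The geometry is simple: if the Sun moves with some velocity relative to the local ISM, the ISM-line shift along a unit sight line is just the projection of that velocity, which is a pure dipole on the sky. A tiny sketch (the velocity below is an arbitrary placeholder; measuring the real one is the whole point):

```python
import numpy as np

# Hypothetical solar velocity relative to the local ISM (km/s),
# in an arbitrary Cartesian frame.
v_sun = np.array([10.0, 5.0, 7.0])

# Unit vectors toward a grid of sky positions (longitude l, latitude b).
l = np.linspace(0.0, 2.0 * np.pi, 73)
b = np.linspace(-np.pi / 2, np.pi / 2, 37)
ll, bb = np.meshgrid(l, b)
n_hat = np.stack([np.cos(bb) * np.cos(ll),
                  np.cos(bb) * np.sin(ll),
                  np.sin(bb)], axis=-1)

# The ISM-line radial velocity along each sight line is minus the
# projection of the solar velocity onto it: lines ahead of our motion
# are blueshifted, lines behind are redshifted -- a dipole pattern
# whose amplitude is |v_sun|.
rv = -n_hat @ v_sun
```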

I also got great news first thing this morning: The idea that Soledad Villar (NYU) and I discussed yesterday about using a generative adversarial network trained on noisy data to de-noise noisy data was a success: It works! Of course, being a mathematician, her reaction was “I think I can prove something!” Mine was: Let's start using it! Probably the mathematical reaction is the better one. If we move on this it will be my first ever real foray into deep learning.

2019-02-05

GANs for denoising; neutrino masses

Tuesdays are light on research! However, I did get a chance today to pitch an idea to Soledad Villar (NYU) about generative adversarial networks (GANs). In Houston a couple of weeks ago she showed results that use a GAN as a regularizer or prior to de-noise noisy data. But she had trained the GAN on noise-free data. I think you could even train this GAN on noisy data, provided that you follow the generator with a noisification step before it hands the output to the discriminator. In principle, the GAN should learn the noise-free model from the noisy data in this case. But I don't understand whether there is enough information. We discussed and planned a few extremely simple experiments to test this.
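As a zeroth-order check that the information can be there, at least in the Gaussian case (my toy here, not one of the experiments we actually planned): if the noise level is known, the noisy data's distribution pins down the noise-free distribution by deconvolution, which is exactly what a noisify-then-discriminate scheme would be exploiting.

```python
import numpy as np

rng = np.random.default_rng(2)

# True (noise-free) signal, and the noisy data we actually observe.
sigma_noise = 3.0
x_true = rng.normal(2.0, 1.0, 100000)
y_obs = x_true + rng.normal(0.0, sigma_noise, 100000)

# In the proposed scheme, the generator produces candidate noise-free
# samples and a noisification step adds noise of the known sigma before
# the discriminator sees them.  In this Gaussian toy, matching the
# noisy distributions forces the generator's variance to equal
# var(y_obs) - sigma_noise**2, i.e. the noise-free variance -- so the
# de-noising information is present, at least here.
implied_var = y_obs.var() - sigma_noise**2
```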

In the NYU Astro Seminar, Jia Liu (Princeton) spoke about neutrino masses. Right now the best limits on the total neutrino mass are from large-scale structure (although many particle physicists are skeptical because these limits involve the very baroque cosmological model, and not just accelerator and detector physics). Liu has come up with some very clever observables in cosmology (large-scale structure) that could do an even better job of constraining the total neutrino mass. I asked what is now one of my standard questions: If you have a large suite of simulations with different assumptions about neutrinos (she does), and a machinery for writing down permitted observables (no-one has this yet!), you could have a robot decide what are the best observables. That is, you could use brute force instead of cleverness, and you might do much, much better. This is still on my to-do list!

2019-02-04

investigating residuals

The day started with a conversation with Bedell (Flatiron) about projects that arose during or after the Terra Hunting Experiment collaboration meeting last week. We decided to prioritize projects we can do right now, with residuals away from our wobble code fits to HARPS data. The nice thing is that because wobble produces a very accurate generative model of the data, the residuals contain lots of subtle science. For example, covariances between residuals in flux space and local spectral slope expectations (from the model) will point to individual pixel or stitching-block offsets on the focal plane. For another, regressions of flux-space residuals against radial-velocity residuals will reveal spectroscopic indicators of spots and plages. There are lots of things to do that are individually publishable but which will also support the THE project.

2019-02-02

Simpson's Paradox, Li, self-calibration

On the plane home from the UK, I worked on three things. The first was a very nice paper by Ivan Minchev (AIP) and Gal Matijevic (AIP) about Simpson's Paradox in Milky Way stellar statistics, like chemodynamics. Simpson's paradox is the point that a trend can have a different sign in a subset of a population than it does in the whole population. The classic example is of two baseball players over two seasons: In the first season, player A has a higher batting average than player B. And in the second season, player A again has a higher batting average than player B. And yet, overall, player B has a higher average! How is that possible? It works if player A bats far more in one season, and player B bats far more in the other, and they both bat higher in that other season. Anyway, the situation is generic in statistics about stars!
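Concrete (made-up) numbers make the paradox vivid:

```python
# Made-up numbers realizing Simpson's paradox: A out-hits B in each
# season, but A's at-bats are concentrated in the weaker season.
#               season 1     season 2  -- (hits, at-bats)
at_bats = {"A": [(10, 20),  (20, 100)],
           "B": [(90, 200), (3, 20)]}

def avg(pairs):
    hits = sum(h for h, n in pairs)
    tries = sum(n for h, n in pairs)
    return hits / tries

# Per-season averages: A wins both seasons.
a1, a2 = (h / n for h, n in at_bats["A"])   # 0.500, 0.200
b1, b2 = (h / n for h, n in at_bats["B"])   # 0.450, 0.150

# Overall averages: B wins.
overall_A = avg(at_bats["A"])   # 30/120 = 0.250
overall_B = avg(at_bats["B"])   # 93/220 ~ 0.423
```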

The second thing I worked on was a new paper by Andy Casey (Monash) and company about how red-giant stars get Lithium abundance anomalies. He shows that Li anomalies happen all over the RGB, and even on stars descending the branch (as per asteroseismology) and thus he can show that the Li anomalies are not caused by any particular stellar evolutionary phase. That argues for either planet engulfment or binary-induced convection changes. The former is also disfavored because of the stars descending the RGB. The real triumph is the huge sample of Li-enhanced stars he has found, working with The Cannon and Anna Y. Q. Ho (Caltech). It's a really beautiful use of The Cannon as a spectral synthesis tool.

The third thing I worked on was a plan to self-calibrate HARPS (and equivalent spectrograph) pixel offsets (that is, calibration errors at the pixel level) using the science data from the instrument. That is, you don't need arcs or Fabry–Perot to find these offsets; since they matter to data interpretation, they can be seen in the data directly! I have a plan, and I think it is easy to implement.

2019-02-01

THE Meeting, day 2

Today at the Terra Hunting Experiment meeting, we got deeply into software and calibration issues. The HARPS family of instruments is designed to be extremely stable in all respects, but also to be monitored by a Fabry–Perot signal imprinted on the detector during the science exposures. The calibration data are taken such that the data obtain absolute calibration information (accuracy) from arc exposures and relative calibration (precision) from the Fabry–Perot data. There was discussion of various replacements for the arcs, including Uranium–Neon lamps, for which the atlas of lines is not yet good enough, and laser-frequency combs, which are not yet reliable.

Another big point of discussion today was the target selection. We (the Experiment) plan to observe a small number (40-ish) of stars for a long time (1000-ish individual exposures for each star). The question of how to choose these targets was interesting and contentious. We want the targets to be good for finding planets! But this connects to brightness, right ascension, stellar activity, asteroseismic mode amplitudes, and many other things, none of which we know in advance. How much work do we need to do in early observing to cut down a parent sample to a solid, small sample? And how much can we figure out from public data already available? By the end of the day there was some consensus that we would probably spend the first month or so of the project doing sample-selection observations.

At the end of the day we discussed data-analysis techniques and tellurics and stellar activity. There are a lot of scientific projects we could be doing that would help with extreme-precision radial-velocity measurements. For instance, Suzanne Aigrain (Oxford) showed a toy model of stellar activity which, if correct at zeroth order, would leave an imprint on a regression of stellar spectrum against measured radial velocity. That's worth looking for. The signal will be very weak, but in a typical spectrum we have tens of thousands of pixels, each of which has signal-to-noise of more than 100. And if a linear regression works, it will deliver a linear subspace-projector that just straight-up improves radial-velocity measurements!
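That last idea can be sketched end-to-end in a toy: fake spectra whose only time-variability is a single activity direction, with an amplitude that also drives the measured RV. A linear regression recovers that direction, and the complementary projector removes it (every number and name below is invented for illustration; this is not Aigrain's actual model):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: each "spectrum" is an activity component (one fixed
# direction in pixel space, with time-varying amplitude) plus photon
# noise; the measured RV is driven by the same amplitude.
n_epochs, n_pix = 200, 50
activity_dir = rng.normal(size=n_pix)
activity_dir /= np.linalg.norm(activity_dir)
amp = rng.normal(size=n_epochs)                        # activity amplitude
spectra = (amp[:, None] * activity_dir[None, :]
           + 0.01 * rng.normal(size=(n_epochs, n_pix)))
rv = 3.0 * amp + 0.1 * rng.normal(size=n_epochs)       # activity-driven RV

# Linear regression of spectrum against measured RV (ridge for
# stability; the strength is chosen generously for this toy).
lam = 10.0
w = np.linalg.solve(spectra.T @ spectra + lam * np.eye(n_pix),
                    spectra.T @ rv)

# The fitted direction recovers the activity direction; projecting it
# out of the spectra removes the activity signal.
w_hat = w / np.linalg.norm(w)
P = np.eye(n_pix) - np.outer(w_hat, w_hat)   # linear subspace projector
cleaned = spectra @ P
```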