2019-02-14

cosmic-ray direction anisotropy

Today I learned from Noemi Globus (NYU) that the Pierre Auger Observatory has an amazing result in cosmic-ray science: The cosmic rays at very high energy do not arrive isotropically at Earth but instead show a significant dipole in their arrival directions. As a reminder: At most energies, the magnetic field of the Galaxy randomizes their directions and they are statistically isotropic at the Earth.

The Auger result isn't new, but it is new to me. And it makes sense: At the very highest energies (above 10^19 eV, apparently) the cosmic rays start to propagate more directly through the magnetic field, and preserve some of their original directional information. But the details are interesting, and Globus believes that she can explain the direction of this dipole from the local large-scale structure plus a model of the Milky Way magnetic field. That's cool! We discussed simulations of the local large-scale structure, and whether they do or can provide reasonable predictions of cosmic-ray production.

2019-02-13

stars, dark matter, TESS

Today at Flatiron, Adrian Price-Whelan (Princeton) gave a very nice talk about using stars to infer things about the dark matter in the Milky Way. He drew a nice graphical model, connecting dark matter, cosmology, and galaxy formation, and then showed how we can infer properties of the dark matter from stellar kinematics (positions and velocities). As my loyal reader knows, this is one of my favorite philosophical questions. Anyway, he concentrated on stellar streams and showed us some nice results on Ophiuchus and GD-1 (some of which have my name on them).

The weekly Stars Meeting at Flatiron was great today. Ben Pope (NYU) showed us some terrific systematic effects in the NASA TESS data, which lead to spurious transit detections if not properly tracked. Some of these probably relate to Earthshine in the detector, about which I hope to learn more when Flatiron people return from the TESS meeting that's on in Baltimore right now.

In that same meeting, Price-Whelan showed us evidence that stars lower on the red-giant branch are more likely to have close binary companions (from APOGEE radial-velocity data). This fits in with an engulfment model (as red giants expand) but it looks like this effect must work out to pretty large orbital radius. Which maybe isn't crazy? Not sure.

And Jason Curtis (Columbia) showed amazing stellar-rotation results from TESS data on a new open cluster that we (Semyeong Oh et al) found in the ESA Gaia data. He can show from a beautifully informative (like really tight) relationship between rotation period and color that the cluster is extremely similar in age to the Pleiades. Really beautiful results. It is clear that gyrochronology works well for young ages (I guess that's a no-brainer) and it is also clear that it is way more precise for groups of stars than for individual stars. We discussed the possibility that this is evidence for the theoretical idea that star clusters should form in clusters.

2019-02-12

candidate Williamson

Today Marc Williamson (NYU) passed (beautifully, I might say) his PhD Candidacy exam. He is working on the progenitors of core-collapse supernovae, making inferences from post-peak-brightness spectroscopy. He has a number of absolutely excellent results. One is (duh!) that the supernova types seem to form a continuum, which makes perfect sense, given that we think they come from a continuous process of envelope loss. Another is that the best time to type a supernova with spectroscopy is 10-15 days after maximum light. That's new! His work is based on the kind of machine learning I love: Linear models and linear support vector machines. I love them because they are convex, (relatively) interpretable, and easy to visualize and check.
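
To make that kind of model concrete, here is a minimal sketch (the feature matrix, labels, and every parameter value below are made-up placeholders, not Williamson's data or pipeline): a linear SVM over binned post-peak spectra, where the learned coefficients live in the same wavelength space as the data, which is why these models are so easy to interrogate.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
n_spectra, n_bins = 200, 500
X = rng.normal(size=(n_spectra, n_bins))        # placeholder spectra (flux in common wavelength bins)
y = rng.integers(0, 3, size=n_spectra)          # placeholder types (stand-ins for IIb, Ib, Ic)

# A linear model is convex and (relatively) interpretable.
clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, dual=False, max_iter=5000))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())

# Inspect which wavelength bins drive the classification:
clf.fit(X, y)
coef = clf.named_steps["linearsvc"].coef_       # shape (n_classes, n_bins)
print("most discriminative bins:", np.argsort(np.abs(coef).sum(axis=0))[-10:])
```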

One amusing idea that came up is that if the stripped supernova types were not in a continuum, but really distinct types, then it might get really hard to explain. Like really hard. So I proposed that it could be a technosignature! That's a NASA neologism, but you can guess what it means. I discussed this more late in the day with Soledad Villar (NYU) and Adrian Price-Whelan (NYU), and we came up with ideas about wisdom signatures and foolishness signatures. See twitter for more.

Also with Villar I worked out a very simple toy problem to think about GANs: Have the data be two-d vectors drawn from a trivial distribution (like a 2-d Gaussian) and have the generator take a one-d Gaussian draw and transform it into fake data. We were able to make a strong prediction about how the transform from the one-d to the two-d should look in the generator.
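
Here is a minimal sketch of that toy problem (assuming PyTorch; the architectures, learning rates, and step counts are arbitrary choices for illustration, not the actual experiment):

```python
import torch
import torch.nn as nn

true_mean = torch.tensor([1.0, -2.0])
true_chol = torch.tensor([[1.0, 0.0], [0.5, 0.8]])   # Cholesky factor of the data covariance

def sample_real(n):
    return true_mean + torch.randn(n, 2) @ true_chol.T

G = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 2))   # 1-d latent -> 2-d fake data
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))   # 2-d data -> logit

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = sample_real(128)
    fake = G(torch.randn(128, 1))

    # discriminator update: real -> 1, fake -> 0
    d_loss = bce(D(real), torch.ones(128, 1)) + bce(D(fake.detach()), torch.zeros(128, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # generator update: try to fool the discriminator
    g_loss = bce(D(fake), torch.ones(128, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()

# The prediction to check: how the learned 1-d -> 2-d map looks, e.g. by plotting
# G(z) for a grid of z values against contours of the true 2-d Gaussian.
z = torch.linspace(-3, 3, 200).unsqueeze(1)
with torch.no_grad():
    curve = G(z)    # the generator's image: a curve in the 2-d data space
```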

2019-02-11

truly theoretical work on growth of structure

The highlight of my day was a great NYU CCPP Brown-Bag talk by Mikhail Ivanov (NYU) about the one-point pdf of dark-matter density in the Universe, using a modified spherical-collapse model, based on things in this paper. It turns out that you can do a very good job of predicting counts-in-cells or equivalent one-point functions for the dark-matter density by relating the linear-theory density to the non-linear density through a mapping close to the calculable one from spherical collapse. More specifically, his approach is to expand the perturbations in the neighborhood of a point into a monopole term and a sum of radial functions times spherical harmonics. The monopole term acts like spherical collapse and the higher harmonics lead to a multiplicative correction. The whole framework depends on some mathematical properties of gravitational collapse that Ivanov can't prove but which seem to be true in simulations. The theory is non-perturbative in the sense that it goes well into non-linear scales, and does well there. That's some impressive theory, and it was a beautiful talk.

2019-02-08

How does the disk work?

Jason Hunt (Toronto) was in town today to discuss things dynamical. I described to him the MySpace project in which Price-Whelan (Princeton) and I are trying to make a data-driven classification of kinematic structures in the thin disk. He described a project in which he is trying to build consistent dynamical models of these structures. He finds that there is no trivial explanation of all the visible structure; probably multiple things are at work. But his models do look very similar to the data qualitatively, so it sure is promising.

2019-02-07

stellar activity cycles

Ben Montet (Chicago) was visiting Flatiron today and gave a very nice talk. He gave a full review of exoplanet discovery science but focused on a few specific things. One of them was stellar activity: He has been using the NASA Kepler full-frame images (which were taken once per month, roughly) to look at precise stellar photometric variations over long timescales (because standard transit techniques filter out the long timescales, and they are hard to recover without full-frame images). He can see stellar activity cycles in many stars, and look at their relationships with things like stellar rotation periods (and hence ages) and so on. He does find relationships! The nice thing is that the NASA TESS Mission produces full-frame images every 30 minutes, so it has way more data relevant to these questions, although it doesn't observe (most of) the sky for very long. All these things are highly relevant to the things I have been thinking about for the Terra Hunting Experiment and related projects, a point he made clearly in his talk.

2019-02-06

measuring the Galaxy with HARPS? de-noising

Megan Bedell (Flatiron) was at Yale yesterday; people there pointed out that some of the time-variable telluric lines we see in our wobble model of the HARPS data are not telluric at all; they are in fact interstellar-medium lines. That got her thinking: Could we measure our velocity with respect to the local ISM using HARPS? The answer is obviously yes, and this could have strong implications for the Milky Way rotation curve! The signal should be a dipolar pattern of RV shifts in interstellar lines as you look around the Sun in celestial coordinates. In the barycentric reference frame, of course.
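
Just to fix the geometry of the measurement, a toy sketch (all array names and numbers here are made up): given barycentric RV shifts of interstellar lines along many sight lines, the dipole fit is a three-parameter linear least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stars = 500
# unit vectors toward the sight lines (placeholder random sky positions)
xyz = rng.normal(size=(n_stars, 3))
n_hat = xyz / np.linalg.norm(xyz, axis=1, keepdims=True)

v_true = np.array([10.0, -15.0, 7.0])                           # km/s, made-up truth
v_los = n_hat @ v_true + rng.normal(scale=0.5, size=n_stars)    # noisy "ISM-line RV shifts"

# The design matrix is just the unit vectors, so the dipole fit is a
# three-parameter linear regression:
v_fit, *_ = np.linalg.lstsq(n_hat, v_los, rcond=None)
print("recovered Sun-relative-to-ISM velocity:", v_fit)
```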

I also got great news first thing this morning: The idea that Soledad Villar (NYU) and I discussed yesterday about using a generative adversarial network trained on noisy data to de-noise noisy data was a success: It works! Of course, being a mathematician, her reaction was “I think I can prove something!” Mine was: Let's start using it! Probably the mathematical reaction is the better one. If we move on this it will be my first ever real foray into deep learning.

2019-02-05

GANs for denoising; neutrino masses

Tuesdays are light on research! However, I did get a chance today to pitch an idea to Soledad Villar (NYU) about generative adversarial networks (GANs). In Houston a couple of weeks ago she showed results that use a GAN as a regularizer or prior to de-noise noisy data. But she had trained the GAN on noise-free data. I think you could even train this GAN on noisy data, provided that you follow the generator with a noisification step before it hands the output to the discriminator. In principle, the GAN should learn the noise-free model from the noisy data in this case. But I don't understand whether there is enough information. We discussed and planned a few extremely simple experiments to test this.
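
The only modification to a standard GAN is where the noise goes, so here is the relevant step as a hedged sketch (PyTorch; G, the real data, and sigma_noise are assumed to exist, and the noise model is taken to be a known, homoskedastic Gaussian for simplicity):

```python
import torch

def discriminator_inputs(G, real_noisy, sigma_noise, batch=128, latent_dim=8):
    """The generator models noise-free data; we noisify its output with the known
    noise model before the discriminator ever sees it, so the discriminator
    compares noisy-fake against noisy-real."""
    z = torch.randn(batch, latent_dim)
    fake_clean = G(z)                                                     # noise-free model samples
    fake_noisy = fake_clean + sigma_noise * torch.randn_like(fake_clean)  # the noisification step
    return real_noisy, fake_noisy

# usage sketch (bce, D, ones, zeros as in any standard GAN training loop):
#   real_noisy, fake_noisy = discriminator_inputs(G, next_real_batch, sigma_noise=0.1)
#   d_loss = bce(D(real_noisy), ones) + bce(D(fake_noisy.detach()), zeros)
#   g_loss = bce(D(fake_noisy), ones)
```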

In the NYU Astro Seminar, Jia Liu (Princeton) spoke about neutrino masses. Right now the best limits on the total neutrino mass come from large-scale structure (although many particle physicists are skeptical because these limits involve the very baroque cosmological model, and not just accelerator and detector physics). Liu has come up with some very clever observables in cosmology (large-scale structure) that could do an even better job of constraining the total neutrino mass. I asked what is now one of my standard questions: If you have a large suite of simulations with different assumptions about neutrinos (she does), and a machinery for writing down permitted observables (no-one has this yet!), you could have a robot decide what the best observables are. That is, you could use brute force instead of cleverness, and you might do much, much better. This is still on my to-do list!

2019-02-04

investigating residuals

The day started with a conversation with Bedell (Flatiron) about projects that arose during or after the Terra Hunting Experiment collaboration meeting last week. We decided to prioritize projects we can do right now, with residuals away from our wobble code fits to HARPS data. The nice thing is that because wobble produces a very accurate generative model of the data, the residuals contain lots of subtle science. For example, covariances between residuals in flux space and local spectral slope expectations (from the model) will point to individual pixel or stitching-block offsets on the focal plane. Or for another, regressions of flux-space residuals against radial-velocity residuals will reveal spectroscopic indicators of spots and plages. There are lots of things to do that are individually publishable but which will also support the THE project.
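
For the second of those, the zeroth-order version is just a per-pixel linear regression, something like this sketch (hypothetical array names and made-up numbers, not the wobble outputs themselves):

```python
import numpy as np

rng = np.random.default_rng(1)
n_epochs, n_pixels = 60, 4000
flux_resid = rng.normal(scale=1e-3, size=(n_epochs, n_pixels))  # placeholder flux residuals
rv_resid = rng.normal(scale=1.0, size=n_epochs)                 # placeholder RV residuals (m/s)

rv_centered = rv_resid - rv_resid.mean()
# per-pixel least-squares slope: cov(flux, rv) / var(rv)
slopes = (flux_resid - flux_resid.mean(axis=0)).T @ rv_centered / (rv_centered @ rv_centered)
scatter = flux_resid.std(axis=0)
significance = slopes * np.sqrt(rv_centered @ rv_centered) / scatter   # rough per-pixel z-score

# pixels with significant slopes are candidate spot/plage (activity) indicators
print("most RV-correlated pixels:", np.argsort(np.abs(significance))[-10:])
```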

2019-02-02

Simpson's Paradox, Li, self-calibration

On the plane home from the UK, I worked on three things. The first was a very nice paper by Ivan Minchev (AIP) and Gal Matijevic (AIP) about Simpson's Paradox in Milky Way stellar statistics, like chemodynamics. Simpson's paradox is the point that a trend can have a different sign in a subset of a population than it does in the whole population. The classic example is of two baseball players over two seasons: In the first season, player A has a higher batting average than player B. And in the second season, player A again has a higher batting average than player B. And yet, overall, player B has a higher average! How is that possible? It works if player A bats far more in one season, and player B bats far more in the other, and they both bat higher in that other season. Anyway, the situation is generic in statistics about stars!
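
Here are made-up numbers that realize the paradox, just to see the arithmetic:

```python
# Player A wins both seasons, player B wins overall, because the at-bats are
# distributed very differently across the two seasons.
hits = {"A": (4, 25), "B": (35, 2)}        # (season 1, season 2) hits
at_bats = {"A": (10, 100), "B": (100, 10)}

for p in "AB":
    s1 = hits[p][0] / at_bats[p][0]
    s2 = hits[p][1] / at_bats[p][1]
    tot = sum(hits[p]) / sum(at_bats[p])
    print(p, f"season 1: {s1:.3f}  season 2: {s2:.3f}  overall: {tot:.3f}")
# A  season 1: 0.400  season 2: 0.250  overall: 0.264
# B  season 1: 0.350  season 2: 0.200  overall: 0.336
```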

The second thing I worked on was a new paper by Andy Casey (Monash) and company about how red-giant stars get lithium abundance anomalies. He shows that Li anomalies happen all over the RGB, and even in stars descending the branch (as classified by asteroseismology), and thus that the Li anomalies are not caused by any particular stellar evolutionary phase. That argues for either planet engulfment or binary-induced convection changes. The former is further disfavored by the stars descending the RGB. The real triumph is the huge sample of Li-enhanced stars he has found, working with The Cannon and Anna Y. Q. Ho (Caltech). It's a really beautiful use of The Cannon as a spectral synthesis tool.

The third thing I worked on was a plan to self-calibrate HARPS (and equivalent spectrograph) pixel offsets (that is, calibration errors at the pixel level) using the science data from the instrument. That is, you don't need arcs or Fabry–Perot to find these offsets; since they matter to data interpretation, they can be seen in the data directly! I have a plan, and I think it is easy to implement.

2019-02-01

THE Meeting, day 2

Today at the Terra Hunting Experiment meeting, we got deeply into software and calibration issues. The HARPS family of instruments is designed to be extremely stable in all respects, and is also monitored by a Fabry–Perot signal imprinted on the detector during the science exposures. The calibration plan is that the data get their absolute calibration (accuracy) from arc exposures and their relative calibration (precision) from the Fabry–Perot data. There was discussion of various replacements for the arcs, including Uranium–Neon lamps, for which the atlas of lines is not yet good enough, and laser-frequency combs, which are not yet reliable.

Another big point of discussion today was the target selection. We (the Experiment) plan to observe a small number (40-ish) of stars for a long time (1000-ish individual exposures for each star). The question of how to choose these targets was interesting and contentious. We want the targets to be good for finding planets! But this connects to brightness, right ascension, stellar activity, asteroseismic mode amplitudes, and many other things, not all of which we know in advance. How much work do we need to do in early observing to cut down a parent sample to a solid, small sample? And how much can we figure out from public data already available? By the end of the day there was some consensus that we would probably spend the first month or so of the project doing sample-selection observations.

At the end of the day we discussed data-analysis techniques and tellurics and stellar activity. There are a lot of scientific projects we could be doing that would help with extreme-precision radial-velocity measurements. For instance, Suzanne Aigrain (Oxford) showed a toy model of stellar activity which, if correct at zeroth order, would leave an imprint on a regression of stellar spectrum against measured radial velocity. That's worth looking for. The signal will be very weak, but in a typical spectrum we have tens of thousands of pixels, each of which has signal-to-noise of more than 100. And if a linear regression works, it will deliver a linear subspace-projector that just straight-up improves radial-velocity measurements!

2019-01-31

THE meeting, day 1

Today was the first day of the Terra Hunting Experiment collaboration meeting. This project is to use HARPS3 for a decade to find Earth-like planets around Sun-like stars. The conversation today was almost entirely about engineering and hardware, which I loved, of course! Many things happened, too many to describe here. One of the themes of the conversation, both in session and out, is that these ultra-precise experiments are truly integrated hardware–software systems. That is, there are deep interactions between hardware and software, and you can't optimally design the hardware without knowing what the software is capable of, and vice versa.

One presentation at the meeting that impressed me deeply was by Richard Hall (Cambridge), who has an experiment to illuminate CCD detectors with a fringe pattern from an interferometer. By sweeping the fringe pattern across the CCD and looking at residuals, he can measure, extremely precisely, the effective centroid of every pixel in device coordinates. That is impressive, and pixel-position irregularity is now known to be one of the leading systematics in extreme-precision radial velocity. That is, we can't just assume that the pixels are on a perfect, regular, rectangular grid. I also worked out (roughly) a way that he could do this mapping with the science data, on sky! That is, we could self-calibrate the sub-pixel shifts. This is highly related to things Dustin Lang (Perimeter) and I did for our white paper about post-wheel Kepler.

2019-01-29

dark matter as a latent-variable field

It was a light research day today! But I did get in a conversation with Ana Bonaca (Harvard) about dark matter and information. She has written a nice paper on what information cold stellar streams bring about the gravitational force field or potential in static models of the Galaxy. We have a bit of work on what information perturbed streams (perturbed by a compact dark-matter substructure) bring. We have ideas about how to think about information in the time-dependent case. And on a separate thread, Bonaca has been thinking about what information the stellar kinematics and chemistry bring.

In some sense, the dark matter is the ultimate latent-variable model: The observations interact with the dark matter very weakly (and I'm using “weak” in the non-physics sense), but ultimately everything is driven by the dark matter. A big part of contemporary astrophysics can be seen in this way: We are trying to infer as many properties of this latent-variable field as we can. Because of this structure, I very much like thinking about it all in an information-theoretic way.

2019-01-28

dark photons, resonant planets

At the CCPP Brown-Bag, Josh Ruderman (NYU) spoke about dark photons with a tiny coupling to real photons. The idea (roughly) is that the dark matter produces (by decays or transitions) dark photons, and these have a small coupling to real photons, but importantly the dark photons are not massless. He then showed that there is a huge open parameter space for such models (that is, models not yet ruled out by any observations) that could nonetheless strongly distort the cosmic background radiation at long wavelengths. And, indeed, there are claims of excesses at long wavelengths. So this is an interesting angle for a complex dark sector. My loyal reader knows I am a fan of having complexity in the dark sector.

In the afternoon, I met up with Anu Raghunathan (NYU) to discuss possible starter projects. I pitched a project to look at our ability to find (statistically) exoplanet-transit-like signals in data. I want to understand in detail how much more sensitive we could be to transiting exoplanets in resonant chains than we would be to the individual planets treated independently. There must be a boost here, but I don't know what it is yet.
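
To make the question concrete, here is the kind of toy framing I have in mind (my own sketch with placeholder statistics, not a worked-out method): take a single-planet, per-period detection statistic and ask how much you gain by co-adding statistics at periods tied together by small-integer ratios.

```python
import numpy as np

periods = np.linspace(1.0, 50.0, 5000)           # trial periods (days)
rng = np.random.default_rng(3)
s = rng.chisquare(df=1, size=periods.size)       # placeholder per-period detection statistics

ratios = [3 / 2, 5 / 3, 2 / 1]                   # near-resonant period ratios to consider

def chain_statistic(P, s, periods, ratios):
    """Joint statistic for a near-resonant pair anchored at inner period P."""
    inner = np.interp(P, periods, s)
    outer = max(np.interp(P * r, periods, s) for r in ratios if P * r <= periods[-1])
    return inner + outer

inner_periods = periods[periods * min(ratios) <= periods[-1]]
best = max(chain_statistic(P, s, periods, ratios) for P in inner_periods)
print("best pair statistic:", best)
# The interesting question is how the false-alarm threshold for this joint
# statistic compares to the threshold for the single-planet statistic.
```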

2019-01-26

SCIMMA workshop, day 2

I officially don't go to meetings on the weekend! That said, I did go to day 2 of a workshop on multi-messenger astrophysics (and, in particular, the field's computing and information infrastructure needs) at Columbia University today. A lot happened, and there were even some fireworks, because there are definitely disagreements among the physicists, the computer scientists, the information scientists, and the high-performance computing experts about what is important, what is hard, and what is in whose domain! I learned a huge amount today, but here are two highlights:

In its current plan (laid out at the meeting by Mario Juric of UW), the LSST project officially doesn't do any scientific analyses; it is only a data source. In this way it is like ESA Gaia. It is trying to do a lot of social engineering to make sure the community organizes good data-analysis and science efforts around the LSST data outputs and APIs. Famously and importantly, it will produce hundreds of thousands to millions of alerts per night, and a lot of the interest is in how to interact with this firehose, especially in multi-messenger, where important things can happen in the first seconds of an astrophysical event.

During Juric's talk, I realized that in order for us to optimally benefit from LSST, we need to know, in advance, where LSST is pointing. Everyone agreed that this will happen (that is, that this feed will exist), and that (relative to the alerts stream) it is a trivial amount of data. I hope this is true. It's important! Because if you are looking for things that happen on the sky, you learn more if you happen to find one that happens inside the LSST field while LSST is looking at it. So maybe looking under the lamp-post is a good idea!

The LCOGT project was represented by Andy Howell (LCOGT). He talked about what they have learned in operating a heterogeneous, global network of telescopes with diverse science goals. He had various excellent insights. One is that scheduling requires very good specification of objectives and good engineering. Another is that openness is critical, and most break-downs are break-downs of communication. Another is that there are ways to structure things to reward generosity among the players. And so on. He talked about LCOGT but he is clearly thinking forward to a future in which networks become extremely heterogeneous and involve many players who do not necessarily all trust one another. That's an interesting limit!

2019-01-25

deep generative models and inference

In my group meeting today, we had a bit of a conversation about deep generative models, inspired by my trip to Math+X Houston. The Villar talk reminded me of the immense opportunity that exists for using deep generative models (like GANs) as part of a bigger inference. I pitched some of my ideas at group meeting.

2019-01-24

Math+X Houston, day 2

Today was day 2 of the 2019 Math+X Symposium on Inverse Problems and Deep Learning in Space Exploration at Rice University in Houston. Again I saw and learned way too much to write in a blog post. Here are some random things:

In a talk about provably or conjecturally effective tricks for optimization, Stan Osher (UCLA) showed some really strange results, like that an operator that (pretty much arbitrarily) smooths the derivatives improves optimization. And the smoothing is in a space where there is no metric or sense of adjacency, so the result is super-weird. But the main takeaway from his talk for me was that we should be doing what he calls “Nesterov” when we do gradient descent. It is like adding some inertia or momentum to the descent. That wasn't his point! But it was very useful for me.
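
For my own notes, here is the comparison on a toy quadratic (a sketch with arbitrary numbers; the look-ahead gradient evaluation is the Nesterov / momentum idea):

```python
import numpy as np

A = np.diag([1.0, 30.0])          # ill-conditioned quadratic 0.5 x^T A x
grad = lambda x: A @ x

def nesterov(x0, lr=0.03, momentum=0.9, steps=100):
    x, v = x0.copy(), np.zeros_like(x0)
    for _ in range(steps):
        lookahead = x + momentum * v          # evaluate the gradient ahead of x
        v = momentum * v - lr * grad(lookahead)
        x = x + v
    return x

def plain_gd(x0, lr=0.03, steps=100):
    x = x0.copy()
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

x0 = np.array([10.0, 10.0])
print("plain GD distance from optimum:", np.linalg.norm(plain_gd(x0)))
print("Nesterov distance from optimum:", np.linalg.norm(nesterov(x0)))
```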

There was a great talk by Soledad Villar (NYU), who showed some really nice uses of deep generative models (in the form of a GAN, but it could be anything) to de-noise data. This, for a mathematician, is like inference for an astronomer: The GAN (or equivalent) trained on data becomes a prior over new data. This connects strongly to things I have been trying to get started with Gaia data and weak-lensing data! I resolved to find Villar back in NYC in February. She also showed some nice results on constructing continuous deep-learning methods, which don't need to work in a discrete data space. I feel like this might connect to non-parametrics.

In the side action at the meeting, I had some valuable discussions. One of the most interesting was with Katherine de Kleer (Caltech), who has lots of interesting data on Io. She has mapped the surface using occultation, but also just has lots of near-infrared adaptive-optics imaging. She needs to find the volcanoes, and currently does so using human input. We discussed what it would take to replace the humans with a physically motivated generative model. By the way (I learned from de Kleer): The volcanoes are powered by tidal heating, and that heating comes from Io's eccentricity, which is 0.004. Seriously you can tidally heat a moon to continuous volcanism with an eccentricity of 0.004. Crazy Solar System we live in!

In the afternoon, Rob Fergus (NYU) talked about the work we have done on exoplanet direct detection with generative models. He has also done the same with discriminative models (more-or-less reproducing our results, in work with Muandet and Schölkopf). That's interesting, because discriminative models are rarely used (or rarely power-used) in astronomy.

2019-01-23

Math+X Houston, day 1

Today was the first day of the 2019 Math+X Symposium on Inverse Problems and Deep Learning in Space Exploration, which is a meeting to bring together mathematicians and domain scientists to discuss problems of mutual interest. I learned a huge amount today! I can't summarize the whole day, so here are just a few things ringing in my brain afterwards:

Sara Seager (MIT) and I both talked about how machine learning helps us in astrophysics. She focused more on using machine learning to speed computation or interpolate or emulate expensive atmospheric retrieval models for exoplanet atmospheres. I focused more on the use of machine learning to model nuisances or structured noise or foregrounds or backgrounds in complex data (focusing on stars).

Taco Cohen (Amsterdam) showed a theory of how to make fully, locally gauge-invariant (what I would call “coordinate free”) deep-learning models. And he gave some examples. Although he implied that the continuous versions of these models are very expensive and impractical, the discretized versions might have great applications in the physical sciences, which we believe truly are gauge-invariant! In some sense he has built a superset of all physical laws. I'd be interested in applying these to things like CMB and 21-cm foregrounds.

Jitendra Malik (Berkeley) gave a nice talk about generative models moving beyond GANs, where he is concerned (like me) with what's called “mode collapse” or the problem that the generator can beat the discriminator without making data that are fully representative of all kinds of real data. He even name-checked the birthday paradox (my favorite of the statistical paradoxes!) as a method for identifying mode collapse. Afterwards Kyle Cranmer (NYU) and I discussed with Malik and various others the possibility that deep generative models could possibly play a role in implicit or likelihood-free inference.
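
The birthday-paradox test, as I understood it, goes something like this sketch (a toy collapsed generator with exactly 100 distinct outputs standing in for the real thing): draw N samples and look for near-duplicate pairs; if duplicates show up once N is of order sqrt(S), the generator's effective support size is of order S.

```python
import numpy as np

rng = np.random.default_rng(7)
modes = rng.normal(size=(100, 2))            # a "collapsed" generator: only 100 distinct outputs

def sample_from_generator(n):
    return modes[rng.integers(0, len(modes), size=n)]

def duplicate_pairs(samples, tol=1e-6):
    """Count sample pairs closer than tol (brute force, fine for small N)."""
    diffs = np.linalg.norm(samples[:, None, :] - samples[None, :, :], axis=-1)
    iu = np.triu_indices(len(samples), k=1)
    return int(np.sum(diffs[iu] < tol))

for n in (5, 10, 20, 50):
    trials = [duplicate_pairs(sample_from_generator(n)) > 0 for _ in range(200)]
    print(f"N = {n:3d}  P(at least one near-duplicate) ~ {np.mean(trials):.2f}")
# Duplicates become likely once N ~ sqrt(100) = 10, which is the birthday-paradox
# signature of a support size of order 100.
```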

There were many other amazing results, including finding seismic pre-cursors to landslides (Seydoux) and using deep models to control adaptive optics (Nousianinen) and analyses of why deep learning models (which have unimaginable capacity) aren't insanely over-fitting (Zdeborová). On that last point the short answer is: No-one knows! But it is really solidly true. My intuition is that it has something to do with the differences between being convex in the parameter space and being convex in the data space. Not that I'm saying anything is either of those!

2019-01-22

inferring maps; detecting waves

I had a great conversation this morning with Yashar Hezaveh (Flatiron) about problems in gravitational lensing. The lensing map is in principle a linear thing (once you set the nonlinear lensing parameters) which means that it is possible to marginalize out the source plane analytically, in principle, or to apply interesting sparseness or compactness priors. We discussed computational data-analysis ideas.
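
The analytic-marginalization point, as a toy sketch under assumed Gaussian priors (my own setup and array names, not Hezaveh's code): for fixed nonlinear lens parameters the model is linear in the source, so the source integrates out in closed form.

```python
import numpy as np

rng = np.random.default_rng(5)
n_pix, n_src = 50, 20
A = rng.normal(size=(n_pix, n_src))       # placeholder linear lensing operator (set by lens params)
C_s = np.eye(n_src)                       # Gaussian source prior covariance
C_n = 0.1 * np.eye(n_pix)                 # noise covariance

def log_marginal_likelihood(d, A, C_s, C_n):
    # d = A s + n, s ~ N(0, C_s), n ~ N(0, C_n)  =>  d ~ N(0, A C_s A^T + C_n)
    C = A @ C_s @ A.T + C_n
    sign, logdet = np.linalg.slogdet(C)
    chi2 = d @ np.linalg.solve(C, d)
    return -0.5 * (chi2 + logdet + len(d) * np.log(2 * np.pi))

s_true = rng.normal(size=n_src)
d = A @ s_true + rng.normal(scale=np.sqrt(0.1), size=n_pix)
print("log marginal likelihood of the data:", log_marginal_likelihood(d, A, C_s, C_n))
```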

Prior to that, I had an interesting conversation with Rodrigo Luger (Flatiron) and Dan Foreman-Mackey (Flatiron) about the priors we use when we do stellar-surface or exoplanet-surface modeling (map-making). Most priors that are easy to use enforce smoothness, but our maps might have sharp features (coastlines!). But we more-or-less agreed that more interesting priors are also less computationally tractable. Duh!

At mid-day, Chiara Mingarelli (Flatiron) argued in a great seminar that pulsar timing will make a gravitational-wave detection in the next few years. Her argument is conservative, so I am very optimistic about this.

2019-01-21

planets around hot stars

My research highlight for the day was a conversation with Ben Pope (NYU) about projects involving hot stars. We have been kicking around various projects and we realized in the call that they really assemble into a whole research program that is both deep and broad:

There are problems related to finding transiting planets around hot stars, which is maybe getting less attention than it should, in part because there are technical challenges (that I think we know how to overcome). And planets found around hot stars might have very good properties for follow-up observations (like transit spectroscopy, for example, and reflected light), and also good prospects for harboring life! (Okay I said it.)

There are problems related to getting stellar ages: Hot stars have lifetimes and evolutionary changes on the same timescales as we think exoplanetary systems evolve dynamically, so there should be great empirical results available here. And hot stars can have reasonable age determinations from rotation periods and post-main-sequence evolution. And we know how to make those age determinations.

And: The hot-star category includes large classes of time-variable, chemically peculiar stars. We now at Flatiron (thanks to Will Farr and Rodrigo Luger) have excellent technology for modeling spectral surface features and variability. These surface maps have the potential to be extremely interesting from a stellar model perspective.

Add to all this the fact that NASA TESS will deliver outrageous numbers of light curves, and spectroscopic facilities and surveys abound. We have a big, rich research program to execute.

2019-01-18

Dr Lukas Heinrich

It was an honor and a privilege to serve on the PhD defense committee of Lukas Heinrich (NYU), who has had a huge impact on how particle physicists do data analysis. For one, he has designed and built a system that permits re-use of intermediate data results from the ATLAS experiment in new data analyses, measurements, and searches for new physics. For another, he has figured out how to preserve data analyses and workflows in a reproducible framework using containers. For yet another, he has been central in convincing the ATLAS experiment and CERN more generally to adopt standards for the registration and preservation of data analysis components. And if that's not all, he has structured this so that data analyses can be expressed as modular graphs and modified and re-executed.

I'm not worthy! But in addition to all this, Heinrich is a great example of the idea (that I like to say) that principled data analysis lies at the intersection of theory and hardware: His work on ruling out supersymmetric models using ATLAS data requires a mixture of theoretical and engineering skills and knowledge that he has nailed.

The day was a pleasure, and that isn't just the champagne talking. Congratulations Dr. Heinrich!

2019-01-17

taking the Fourier transform of a triangle?

As my loyal reader knows, Kate Storey-Fisher (NYU) and I are looking at the Landy–Szalay estimator for the correlation function along a number of different axes. Along one axis, we are extending it to estimate a correlation function that is a smooth function of parameters rather than just in hard-edged bins. Along another, we are asking why the correlation function is so slow to compute when the power spectrum is so fast (and they are equivalent!). And along another, we are consulting with Alex Barnett (Flatiron) on the subject of whether we can estimate a correlation function without having a random catalog (which is, typically, 100 times larger than the data, and thus dominates all compute time).
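
For reference, the estimator itself, from normalized pair counts (a sketch; the array names are placeholders, and in practice DD, DR, RR come from a pair-counting code run against the ~100x random catalog we would like to avoid making):

```python
import numpy as np

def landy_szalay(DD, DR, RR, n_data, n_rand):
    """DD, DR, RR: raw pair counts per separation bin."""
    dd = DD / (n_data * (n_data - 1) / 2)        # normalized data-data pairs
    dr = DR / (n_data * n_rand)                  # normalized data-random pairs
    rr = RR / (n_rand * (n_rand - 1) / 2)        # normalized random-random pairs
    return (dd - 2 * dr + rr) / rr

# usage sketch: xi = landy_szalay(DD, DR, RR, n_data=len(data), n_rand=len(randoms))
```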

Of course when you get a mathematician involved, strange things often happen. One thing is that Barnett has figured out that the Landy–Szalay estimator appears in literally no literature outside cosmology! And of course even in cosmology it is only justified in the limit of near-zero, Gaussian fluctuations. That isn't the limit of most correlation-function work being done these days. In the math literature they have different estimators. It's clear we need to build a testbed to check the estimators in realistic situations.

One thing that came up in our discussion with Barnett is that it looks like we don't ever need to make a random catalog! The role that the random catalog plays in the estimation could be played (for many possible estimators) by an auto-correlation of the survey window with itself, which in turn is a trivial function of the Fourier transform of the window function. So instead of delivering, with a survey, a random catalog, we could perhaps just be delivering the Fourier transform of the window function out to some wave number k. That's a strange point!

In the discussion, I thought we might actually have an analytic expression for the Fourier transform of the window function, but I was wrong: It turns out that there aren't analytic expressions for the Fourier transforms of many functions, and in particular the Fourier transform of the characteristic function of a triangle (the function that is unity inside the triangle and zero outside) doesn't seem to have a simple closed form, or at least not one we could call to mind. I was surprised by that.

2019-01-16

MySpace, tagging

Wednesdays at Flatiron are pretty fun! Today Kathryn Johnston (Columbia) convened a discussion of dynamics that was pretty impressive. In that discussion my only contribution was to briefly describe the project that Adrian Price-Whelan (Princeton) and I have started called MySpace. This project is to find (in a data-driven way) the local transformation of velocity as a function of position that makes the local disk velocity structure more coherent over a larger patch of the disk.

At first we thought we were playing around with this idea, but then we realized that it produces an unsupervised, data-driven classification of all the stars: Stars in local velocity-space concentrations either belong to concentrations that extend in some continuous way over a larger patch of the disk or they do not. And this ties into the origins of the velocity substructure. While I was talking about this, Robyn Sanderson (Flatiron) pointed out that if the substructure is created by resonances or forcing by the bar, there are specific predictions for how the local transformation should look. That's great, because it is a data-driven way of looking at the Milky Way bar. Sanderson also gave us relevant references in the literature.
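
The simplest version of the transformation I have in mind is something like this sketch (a toy linear velocity-field model with made-up numbers, not the actual MySpace code): fit v(x) = v0 + A (x - x0) to the local stars and subtract it, so velocity substructure can be compared coherently across a larger patch of the disk.

```python
import numpy as np

rng = np.random.default_rng(11)
n_stars = 2000
x = rng.uniform(-1.0, 1.0, size=(n_stars, 3))             # positions relative to x0 (kpc)
A_true = np.array([[0.0, 15.0, 0.0],                      # made-up velocity gradients (km/s/kpc)
                   [-30.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
v = np.array([5.0, -10.0, 2.0]) + x @ A_true.T + rng.normal(scale=8.0, size=(n_stars, 3))

# least-squares fit of v0 and A: design matrix [1, x, y, z] per star
M = np.hstack([np.ones((n_stars, 1)), x])
coef, *_ = np.linalg.lstsq(M, v, rcond=None)              # shape (4, 3): [v0; A^T]
v0_fit, A_fit = coef[0], coef[1:].T

v_corrected = v - (v0_fit + x @ A_fit.T)                  # velocities in the transformed frame
print("recovered velocity-gradient matrix (km/s/kpc):\n", np.round(A_fit, 1))
```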

Late in the day, I wrote down some ideas about how we might tease apart metallicity, age, and kinematics in local samples of stars. The sample of Solar Twins from Megan Bedell (Flatiron) has empirical properties that suggest that a lot of the chemical (element-abundance-ratio) diversity of the stars is strongly related to stellar age. So is there information left for chemical tagging? Maybe not. I tried to write down a framework for asking these questions.

2019-01-15

Dr Kilian Walsh

One of the great pleasures of my job is being involved in the bestowal of PhDs! Today, Kilian Walsh (NYU) defended his PhD, which he did with Jeremy Tinker (NYU). The thesis was about the connections between galaxies and their halos. As my loyal reader knows, I find it amazing that this description of the world works at all, but it works incredibly well.

One of the puzzling results from Walsh's work is that although halos themselves have detailed properties that depend on how, where, and when they assembled their mass, the properties of the galaxies that they contain don't seem to depend on any halo property except the mass itself! So while the halos have (say) spin parameters that depend on assembly time, the galaxies don't seem to have properties that depend on halo spin! Or if they do, it's pretty subtle. This subject is called halo assembly bias and galaxy assembly bias; there is plenty of the former and none of the latter. Odd.

Of course the tools used for this are blunt tools, because we don't get to see the halos! But Walsh's work has been about sharpening those tools. (I could joke that he sharpens them from extremely blunt to very blunt!) For example, he figured out how to use the void probability function in combination with clustering to put stronger constraints on halo occupation models.

Congratulations Dr. Walsh!

2019-01-14

out sick

I was out sick today. It was bad, because I was supposed to give the kick-off talk at Novel Ideas for Dark Matter in Princeton.

2019-01-11

what's permitted for target selection?

Because I have been working with Rix (MPIA) to help the new project SDSS-V make plans to choose spectroscopic targets, and also because of work I have been doing with Bedell (Flatiron) on thinking about planning radial-velocity follow-up observations, I find myself saying certain things over and over again about how we are permitted to choose targets if we want it to be easy (and even more importantly, possible) to use the data in a statistical project that, say, determines the population of stars or planets, or, say, measures the structural properties of the Milky Way disk. Whenever I am saying things over and over again, and I don't have a paper to point to, that suggests we write one. So I started conceiving today a paper about selection functions in general, and what you gain and lose by making them more complicated in various ways. And what is not allowed, ever!

2019-01-10

#hackAAS at #aas233

Today was the AAS Hack Together Day, sponsored by the National Science Foundation and by the Moore Foundation, both of which have been very supportive of the research I have done, and both of which are thinking outside the box about how we raise the next generation of scientists! We had a huge group and lots happened. If you want to get a sense of the range and scope of the projects, look at these telegraphic wrap-up slides, which (as always) only paint a partial picture!

We were very fortunate to have Huppenkothen (UW) in the room, and in (literally) five minutes before we started, she put together these slides about hack days. I love that! I think Huppenkothen is the world ambassador and chief philosopher of hacking.

I worked on two hacks. Well really one. The one I didn't really work on was to launch a Mastodon instance. Mastodon is the open-source alternative to Twitter(tm) and has nice features like content warnings (on which you can filter) and community-governable rules and restrictions. I thought it might be fun to try to compete with the big players in social! Although I didn't work on it at all, Dino Bektešević (UW) took over the project and (with a lot of hacking) got it up and running on an AWS instance. It took some hacking because (like many open-source projects) the documentation and tutorials were out of date and filled with version (and other) inconsistencies. But Bektešević (and I by extension) learned a lot!

The hack I actually did (a very tiny, tiny bit of) work on was to write a stellar-binaries-themed science white paper for the Decadal Survey. Katie Breivik (CITA) and Adrian Price-Whelan (Princeton) are leading it. Get in touch with us if you want to help! The point is: Binary stars are a critical part of every science theme for the next decade.

2019-01-09

#AAS233, day 3

I arrived today at #AAS233. I'm here mainly for the Hack Together Day (which is tomorrow), but I did go to some exoplanet talks. One nice example was Molly Kosiarek (UCSC), who talked about a small planet in some K2 data. She fit Gaussian Processes to the K2 light curve and used that to determine kernel parameters for a quasi-periodic stochastic process. She then used those kernel parameters to fit the radial-velocity data to improve her constraints on the planet mass. She writes more in this paper. Her procedure involves quite a few assumptions, but it is cool because it is a kernel-learning problem, and she was explicitly invoking an interesting kind of generalizability (learning on the light curve, applying to spectroscopy).
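
Here is my reading of the kernel-transfer idea as a sketch (a hand-rolled quasi-periodic kernel in numpy; the parameter values, data, and function names are made up, not Kosiarek's code): learn the kernel parameters on the light curve, then carry the same kernel shape over to the radial-velocity fit.

```python
import numpy as np

def quasi_periodic_kernel(t1, t2, amp, ell, gamma, period):
    """Squared-exponential times exp-sine-squared: smooth, quasi-periodic variability."""
    dt = t1[:, None] - t2[None, :]
    return amp**2 * np.exp(-0.5 * dt**2 / ell**2 - gamma * np.sin(np.pi * dt / period)**2)

def gp_log_likelihood(t, y, yerr, **kernel_params):
    K = quasi_periodic_kernel(t, t, **kernel_params) + np.diag(yerr**2)
    sign, logdet = np.linalg.slogdet(K)
    alpha = np.linalg.solve(K, y)
    return -0.5 * (y @ alpha + logdet + len(y) * np.log(2 * np.pi))

# "learn" (amp, ell, gamma, period) by maximizing this over the light curve,
# then evaluate the radial-velocity model with the same kernel shape:
rng = np.random.default_rng(2)
t_lc = np.linspace(0.0, 80.0, 400)
lc = 1e-3 * np.sin(2 * np.pi * t_lc / 12.3) + 2e-4 * rng.normal(size=t_lc.size)
print(gp_log_likelihood(t_lc, lc, 2e-4 * np.ones_like(lc),
                        amp=1e-3, ell=30.0, gamma=2.0, period=12.3))
```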

Late in the day I had a conversation with Jonathan Bird (Nashville) about the challenges of getting projects done. And another with Chris Lintott (Oxford) about scientific communication on the web and in the journals.

2019-01-08

reproducing old results

I spent a bit of research time making near-term plans with Storey-Fisher (NYU), who is developing new estimators of clustering statistics. Because clustering is two-point (at least), computational complexity is an issue; she is working on getting things fast. She has had some success; it looks like we are fast enough now. The near-term goals are to reproduce some high-impact results from some Zehavi papers on SDSS data. Then we will have a baseline to beat with our new estimators.

2019-01-07

expected future-discounted discovery rate

My tiny bit of research today was on observation scheduling: I read a new paper by Bellm et al about scheduling wide-field imaging observations for ZTF and LSST. It does a good job of talking about the issues but it doesn't meet my (particular, constrained) needs, in part because Bellm et al are (sensibly) scheduling full nights of observations (that is, not going just-in-time with the scheduling), and they have separate optimizations for volume searched and slew overheads. However, it is highly relevant to what I have been doing. It also had lots of great references that I didn't know about! They also make a strong case for optimizing full nights rather than going just-in-time. I agree that this is better, provided that your conditions aren't changing under you. If they are changing under you, you can't really plan ahead. Interesting set of issues, and something that differentiates imaging-survey scheduling from spectroscopic follow-up scheduling.

I also did some work comparing expected information gain to expected discovery rate. One issue with information gain is that if it isn't the gain from just this exposure (and it isn't, because we have to look ahead), then it is hard to write down, because it depends strongly on future decisions (for example, on whether we decide to stop observing the source entirely!). So I am leaning towards making my first contribution on this subject be about discovery rate.

Expected future-discounted discovery rate, that is.
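
In symbols (my notation, nothing more): if p_t is the probability that the exposure scheduled at future time step t yields a discovery under a given policy, score the policy by R = sum_t gamma^t p_t with 0 < gamma < 1, so that near-term discoveries count more than speculative far-future ones. A trivial sketch:

```python
import numpy as np

def discounted_discovery_rate(p_future, gamma=0.95):
    """p_future: expected discovery probabilities for the next N scheduled exposures."""
    t = np.arange(len(p_future))
    return np.sum(gamma**t * np.asarray(p_future))

print(discounted_discovery_rate([0.01, 0.02, 0.005, 0.03]))
```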

2019-01-06

target selection

On the weekend, Rix (MPIA) and I got in a call to discuss the target selection for SDSS-V, which is a future survey to measure multi-epoch spectroscopy for (potentially) millions of stars. The issue is that we have many stellar targeting categories, and Rix's and my view is that targeting should be based only on the measured properties of stars in a small set of public, versioned photometric and astrometric catalogs.

This might not sound like a hard constraint, but it is: It means you can't use all the things we know about the stars to select them. That seems crazy to many of our colleagues: Aren't you wasting telescope time if you observe things that you could have known, from existing observations, were not in the desired category? That is, if you require that selection be done from a certain set of public information sources, you are guaranteeing an efficiency hit.

But that is compensated (way more than compensated) by the point that the target selection will be understandable, repeatable, and simulate-able. That is, the more automatic the target selection is, from simple inputs, the easier it is to do population analyses and statistical analyses, and to simulate the survey (or what the survey would have done in a different galaxy). See, for example, cosmology: The incredibly precise measurements in cosmology have been made possible by using simple, inefficient, but easy-to-understand-and-model selection functions. And, indeed: When the selection functions get crazy (as they do in SDSS-III quasar target selection, with which I was involved), the data become very hard to use (the clustering of those quasars on large scales can never be known extremely precisely).

Side note: This problem has been disastrous for radial-velocity surveys for planets, because in most cases, the observation planning has been done by people in a room, talking. That's extremely hard to model in a data analysis.

Rix and I also discussed a couple of subtleties. One is that not only should the selection be based on public surveys, it really should be based only on the measurements from those surveys, and not the uncertainties or error estimates. This is in part because the uncertainties are rarely known correctly, and in part because the uncertainties are a property of the survey, not the Universe! But this is a subtlety. Another subtlety is that we might not just want target lists, we might want priorities. Can we easily model a survey built on target priorities rather than target selection? I think so, but I haven't faced that yet in my statistical work.

2019-01-04

refereeing, selecting, and planning

I don't think I have done a good job of writing the rules for this blog, because I don't get to count refereeing. Really, refereeing papers is a big job and it really is research, since it sometimes involves a lot of literature work or calculation. I worked on some refereeing projects today for a large part of the day. Not research? Hmmm.

Also not counting as research: I worked on the Gaia Sprint participant selection. This is a hard problem because everyone who applied would be a good participant! As part of this, I worked on demographic statistics of the applicant pool and the possibly selected participants. I hope to be sending out emails next week (apologies to those who are waiting for us to respond!).

Late in the day I had a nice conversation with Stephen Feeney (Flatiron) about his upcoming seminar at Toronto. How do different aspects of data analysis relate? And how do the different scientific targets of that data analysis relate? And how to tell the audience what they want to know about the science, the methods, and the speaker. I am a big believer that a talk you give should communicate things about yourself and not just the Universe. Papers are about the Universe, talks are about you. That's why we invited you!

2019-01-03

the limits of wobble

The day was pretty much lost to non-research in the form of project-management tasks and refereeing and hiring and related. But I did get in a good conversation with Bedell (Flatiron), Luger (Flatiron), and Foreman-Mackey (Flatiron) about the hyper-parameter optimization in our new wobble code. It requires some hand-holding, and if Bedell is going to “run on everything” as she intends to this month, it needs to be very robust and hands-free. We discussed for a bit and decided that she should just set the hyper-parameters to values we know are pretty reasonable right now and just run on everything, and we should only reconsider this question after we have a bunch of cases in hand to look at and understand. All this relates to the point that although we know that wobble works incredibly well on the data we have run it on, we don't currently know its limits in terms of signal-to-noise, number of epochs, phase coverage in the barycentric year, and stellar temperature.

2019-01-02

finished a paper!

It was a great day at Flatiron today! Megan Bedell (Flatiron) finished her paper on wobble. This paper is both about a method for building a data-driven model for high-resolution spectral observations of stars (for the purposes of making extremely precise radial-velocity measurements), and about an open-source code that implements the model. One of the things we did today before submission is discuss the distinction between a software paper and a methods paper, and then we audited the text to make sure that we are making good software/method distinctions.

Another thing that came up in our finishing-up work was the idea of an approximation: As I like to say, once you have specified your assumptions or approximations with sufficient precision, there is only one method to implement. That is, there isn't an optimal method! There is only the method, conditioned on assumptions. But now the question is: What is the epistemological status of the assumptions? I think the assumptions are just choices we make in order to specify the method! That is, when we treat the noise as Gaussian, it is not a claim that the noise is truly Gaussian! It is a claim that we can treat it as Gaussian and still get good and useful results. Once again, my pragmatism. We audited a bit for this kind of language too.

We submitted the paper to the AAS Journals and to arXiv. Look for it on Thursday night (US time) or Friday morning!