2017-12-22

testing tacit knowledge about measurements

In astrometry, there is folk knowledge (tacit knowledge?) that the (best possible) uncertainty you can obtain on any measurement of the centroid of a star in an image is proportional to the size (radius or diameter or FWHM) of the point-spread function, and inversely proportional to the signal-to-noise ratio with which the star is detected in the imaging. This makes sense: The sharper a star is, the more precisely you can measure it (provided you are well sampled and so on), and the more data you have, the better you do. These are (as my loyal reader knows) Cramér–Rao bounds, and they relate directly to Fisher information.

Oddly, in spectroscopy, there is folk knowledge that the best possible uncertainty you can obtain on the radial velocity of a star is proportional to the square root of the width (FWHM) of the spectral lines in the spectrum. I was suspicious, but Bedell (Flatiron) demonstrated this today with simulated data. It's true! I was about to resign my job and give up, when we realized that the difference is that the spectroscopists don't keep signal-to-noise fixed when they vary the line widths! They keep the contrast fixed, and the contrast appears to be the depth of the line (or lines) at maximum depth, in a continuum-normalized spectrum.
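Here is a tiny numerical check of that scaling (my own sketch, not Bedell's simulation): the Cramér–Rao bound on the centroid of a single Gaussian absorption line with constant per-pixel noise, evaluated once at fixed line depth (contrast) and once at fixed detection signal-to-noise. All the numbers are invented.

```python
import numpy as np

# Sketch: Cramer-Rao bound on the centroid of a Gaussian absorption line,
# with constant per-pixel noise sigma_n. Pixel scale, noise level, and
# depths are invented for illustration.

def centroid_crlb(width, depth, sigma_n=0.01, dx=0.05):
    x = np.arange(-50.0, 50.0, dx)
    # derivative of the continuum-normalized line model with respect to the centroid
    dmodel_dmu = depth * (x / width**2) * np.exp(-0.5 * x**2 / width**2)
    fisher = np.sum(dmodel_dmu**2) / sigma_n**2
    return 1.0 / np.sqrt(fisher)

for width in [1.0, 2.0, 4.0]:
    fixed_contrast = centroid_crlb(width, depth=0.5)
    # holding the detection S/N fixed over the line footprint implies depth ~ 1/sqrt(width)
    fixed_snr = centroid_crlb(width, depth=0.5 / np.sqrt(width))
    print(width, fixed_contrast, fixed_snr)
# fixed contrast: uncertainty grows like sqrt(width); fixed S/N: like width
```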

This all makes sense and is consistent, but my main research event today was to be hella confused.

2017-12-21

optimization is hard

Megan Bedell (Flatiron) and I worked on optimization for our radial-velocity measurement pipeline. We did some experimental coding with scipy optimization routines (which are not documented quite as well as I would like), and we played with our own home-built gradient descent. It was a roller-coaster, and we still get some unexpected behaviors. Bugs clearly remain, which is good, actually, because it means that we can only do better than we are doing now, which is already pretty good.
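For the record, the kind of thing we were experimenting with looks roughly like this (a toy objective and step size, not our actual pipeline):

```python
import numpy as np
from scipy.optimize import minimize

# Toy comparison of scipy's L-BFGS-B against a hand-rolled gradient descent
# on a made-up chi-squared-like objective.

target = np.array([1.0, -2.0])

def objective(p):
    return np.sum((p - target) ** 2)

def gradient(p):
    return 2.0 * (p - target)

p0 = np.zeros(2)
result = minimize(objective, p0, jac=gradient, method="L-BFGS-B")
print("scipy:", result.x)

# home-built gradient descent with a fixed step size
p = p0.copy()
for _ in range(1000):
    p = p - 0.1 * gradient(p)
print("gradient descent:", p)
```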

2017-12-20

if your sample isn't contaminated, you aren't trying hard enough

At the Gaia DR2 parallel-working meeting, Adrian Price-Whelan (Princeton) and I discussed co-moving stars with Jeff Andrews (Crete). Our discussion was inspired by the fact that Andrews has written some pretty strongly worded critical things about our own work with Semyeong Oh (Princeton). We clarified that there are three (or maybe four) different things you might want to be looking for: stars that have the same velocities (co-moving stars), stars that are dynamically bound (binaries), or stars that were born together (co-eval) or have the same birthplace or same abundances or same ages, and so on. In the end we agreed that different catalogs might be made with different goals in mind, and different tolerances for completeness and purity. But one thing I insisted on (and perhaps pretty strongly) is that you can't have high completeness without taking on low purity. That is, you have to take on contamination if you want to sample the full distribution.

This is related to a much larger point: If you want a pure and complete sample, you have to cut your data extremely hard. Anyone who has a sample of anything that is both pure and complete is either missing large fractions of the population they care about, or else is spending way too much telescope time per object. In any real, sensible sample of anything in astronomy that is complete, we are going to have contamination. And any models or interpretation we make of the sample must take that contamination into account. Any astronomers who are unwilling to live with contamination are deciding not to use our resources to their fullest, and that's irresponsible, given how precious and expensive those resources are.

2017-12-19

latent variable models: What's the point?

The only research time today was a call with Rix (MPIA) and Eilers (MPIA) about data-driven models of stars. The Eilers project is to determine the stellar luminosities from the stellar spectra, and to do so accurately enough that we can do Milky-Way mapping. And no, Gaia won't be precise enough for what we need. Right now Eilers is comparing three data-driven methods. The first is a straw man,
which is nearest-neighbor! Always a crowd-pleaser, and easy. The second is The Cannon, which is a regression, but fitting the data as a function of labels. That is, it involves optimizing a likelihood. The third is the GPLVM (or a modification thereof) where both the data and the labels are a nonlinear function of some uninterpretable latent variables.

We spent some of our time talking about exactly what are the benefits of going to a latent-variable model over the straight regression. We need benefits, because the latent-variable model is far more computationally challenging. Here are some benefits:

The regression requires that you have a complete set of labels. Complete in two senses. The first is that the label set is sufficient to explain the spectral variability. If it isn't, the regression won't be precise. It also needs to be complete in the sense that every star in the training set has every label known. That is, you can't live with missing labels. Both of these are solved simply in the latent-variable model. The regression also requires that you not have an over-complete set of labels. Imagine that you have label A and label B and a label that is effectively A+B. This will lead to singularities in the regression. But no problem for a latent-variable model. In the latent-variable model, all data and all known labels are generated as functions (nonlinear functions drawn from a Gaussian process, in our case) of the latent variables. And those functions can generate any and all data and labels we throw at them. Another (and not unrelated) advantage of the latent-variable formulation is that we can have a function space for the spectra that has higher (or lower) dimensionality than the label space, which can cover variance that isn't label-related.
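To make that structure concrete, here is a deliberately tiny, linear stand-in for the latent-variable idea (not the GPLVM itself; the shapes, noise levels, masking fraction, and optimizer are all invented): spectra and labels are both generated from shared latents, and a mask handles missing labels.

```python
import numpy as np

# Toy latent-variable model: spectra X and labels Y both generated from
# shared latents Z; a mask marks which labels are observed. Linear maps
# stand in for the nonlinear (GP) functions of the real model.

rng = np.random.default_rng(42)
n_stars, n_pix, n_labels, n_latent = 50, 200, 3, 5

Z_true = rng.normal(size=(n_stars, n_latent))
A_true = rng.normal(size=(n_latent, n_pix))
B_true = rng.normal(size=(n_latent, n_labels))
X = Z_true @ A_true + 0.01 * rng.normal(size=(n_stars, n_pix))
Y = Z_true @ B_true + 0.01 * rng.normal(size=(n_stars, n_labels))
mask = rng.random((n_stars, n_labels)) > 0.3  # roughly 30 percent of labels missing

def loss(Z, A, B):
    return (np.sum((X - Z @ A) ** 2)
            + np.sum(mask * (Y - Z @ B) ** 2)
            + 0.1 * np.sum(Z ** 2))  # prior on the latents (see the 2017-11-14 entry)

# crude simultaneous gradient descent, just to show the structure
Z = rng.normal(size=Z_true.shape)
A = rng.normal(size=A_true.shape)
B = rng.normal(size=B_true.shape)
lr = 1e-4
for _ in range(2000):
    R_x = Z @ A - X
    R_y = mask * (Z @ B - Y)
    Z_new = Z - lr * (2 * R_x @ A.T + 2 * R_y @ B.T + 0.2 * Z)
    A_new = A - lr * (2 * Z.T @ R_x)
    B_new = B - lr * (2 * Z.T @ R_y)
    Z, A, B = Z_new, A_new, B_new
print("final loss:", loss(Z, A, B))
```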

Finally, the latent-variable model has the causal structure that most represents how stars really are: That is, we think star properties are set by some unobserved physical properties (relating to mass, age, composition, angular momentum, dynamo, convection, and so on) and the emerging spectrum and other properties are set by those intrinsic physical properties!

One interesting thing about all this (and brought up to me by Foreman-Mackey last week) is that the latent-variable aspect of the model and the Gaussian-process aspect of the model are completely independent. We can get all of the (above) advantages of being latent-variable without the heavy-weight Gaussian process under the hood. That's interesting.

2017-12-18

playing with stellar spectra; dimensionality

In another low-research day, I did get in a tiny bit of work time with Bedell (Flatiron). We did two things: In the first, we fit each of her Solar twins as a linear combination of other Solar twins. Then we looked for spectral deviations. It looks like we find stellar activity in the residuals. What else will we find?
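In sketch form, the first exercise is just a least-squares fit (the array name and shape are my assumptions: a roughly continuum-normalized flux matrix of shape [n_stars, n_pixels]):

```python
import numpy as np

# Fit star i as a linear combination of all the other stars' spectra, and
# return the best-fit model and the residuals we then inspect.

def fit_as_combination(spectra, i):
    target = spectra[i]
    others = np.delete(spectra, i, axis=0)
    coeffs, *_ = np.linalg.lstsq(others.T, target, rcond=None)
    model = others.T @ coeffs
    return model, target - model

# usage: residuals = [fit_as_combination(spectra, i)[1] for i in range(len(spectra))]
```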

In the second thing we did, we worked through all our open threads, figured out the next steps, and assigned tasks. Some of these are writing tasks, some are coding tasks, and some are thinking tasks. The biggest task I am assigned—and this is also something Rix (MPIA) is asking me to do—is to write down a well-posed procedure for deciding what the dimensionality is of a low-dimensional data set embedded in a high-dimensional space. I don't like the existing solutions in the literature, but as Rix likes to remind me: I have to put up or shut up!
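For concreteness, the kind of existing answer I am not satisfied with looks like this (a PCA explained-variance threshold; the array name and the threshold are invented):

```python
import numpy as np

# Estimate "dimensionality" as the number of principal components needed to
# capture some fraction of the variance. Simple, but threshold-dependent and
# blind to curvature and to heteroskedastic noise.

def pca_dimensionality(X, threshold=0.99):
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)
    frac = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(frac, threshold) + 1)
```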

2017-12-15

batman and technetium

Today was a low-research day (letters of recommendation), but Elisabeth Andersson (NYU) and I got an optimization working, comparing a batman periodic transit model to a Kepler light curve. I left her with the problem of characterizing the six planets in the Trappist 1 system.
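For context, a batman transit model gets set up roughly like this (illustrative parameter values, not Andersson's actual fit):

```python
import numpy as np
import batman

# Periodic transit model to compare against a Kepler light curve.
params = batman.TransitParams()
params.t0 = 0.0        # time of inferior conjunction [days]
params.per = 3.5       # orbital period [days]
params.rp = 0.05       # planet radius [stellar radii]
params.a = 10.0        # semi-major axis [stellar radii]
params.inc = 89.0      # inclination [deg]
params.ecc = 0.0       # eccentricity
params.w = 90.0        # longitude of periastron [deg]
params.u = [0.3, 0.2]  # limb-darkening coefficients
params.limb_dark = "quadratic"

t = np.linspace(-0.1, 0.1, 500)  # times [days]
model_flux = batman.TransitModel(params, t).light_curve(params)
```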

At lunch, Foreman-Mackey (Flatiron) proposed a model for stellar spectra that is intermediate in sophistication and computational complexity between The Cannon and the Eilers (MPIA) GPLVM. He also has a fast implementation in TensorFlow. Most of the TensorFlow speed-up comes from its clever use of GPUs. Late in the day, Bedell proposed that we look for short-lived radioactive isotopes in her Solar twins. That’s a great idea!

2017-12-14

Fisher matrix manipulations

Not much happened today, research-wise. But one great thing was a short call with Ana Bonaca, in which we reviewed what we are doing with our Fisher matrices. We are doing the right things! There are two operations you want to do: Change variables, and marginalize out nuisances. These look pretty different. That is, if you just naively change variables to a single variable, and don't marginalize out anything, the operation is a contraction (a quadratic form with the Fisher matrix as the metric), but it is equivalent to assuming that all else is fixed. That is, it slices your likelihood. This is not usually the conservative move, which is to either marginalize (if you are Bayesian) or profile (if you are frequentist). These operations involve inverses of Fisher matrices. Some of the relevant details are in section 2 of this useful paper.
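In code, the two operations (and the slicing trap) look like this for a toy three-parameter Fisher matrix (the numbers are invented):

```python
import numpy as np

# Parameter 0 is the one we care about; parameters 1 and 2 are nuisances.
F = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])

# Slicing: hold the nuisances fixed and just read off the (0,0) element.
sigma_sliced = 1.0 / np.sqrt(F[0, 0])

# Marginalizing (equivalently, profiling, for a Gaussian likelihood):
# invert the full Fisher matrix and read off the (0,0) covariance element.
sigma_marginalized = np.sqrt(np.linalg.inv(F)[0, 0])

# Change of variables: F' = J^T F J, with J the Jacobian d(old)/d(new).
J = np.array([[1.0, 0.0, 0.0],
              [0.5, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
F_new = J.T @ F @ J

print(sigma_sliced, "<=", sigma_marginalized)  # slicing always gives the smaller, more optimistic error bar
```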

2017-12-13

quality time with the iPTA

The research highlight of the day was a couple of hours spent with the iPTA data analysis collaboration. Justin Ellis (WVU) led an overview and extremely interactive discussion of their likelihood function, which they use to detect the gravitational radiation stochastic background from pulsar timing, in the presence of systematic nuisances. These include time-variable dispersion measure, red noise, accelerations and spin-down, receiver and backend calibrations, ephemeris issues, and more! The great cleverness is that they linearize and apply Gaussian priors, so they can make use of all the beautiful linear algebra that my loyal reader hears so much about. The likelihood function is a thing of beauty, and computationally tractable. They asked me for advice, but frankly, I’m not worthy.

2017-12-12

objective Bayes; really old data

Today was day two with the Galactic Center Group at UCLA. Again, a huge argument about priors broke out. As my loyal reader knows, I am a subjective Bayesian, not an objective Bayesian. Or more correctly “I don't always adopt Bayes, but when I do, I adopt subjective Bayes!” But the argument was about the best way to set objective-Bayes priors. My position is that you can't set them in the space of your parameters, because your parameterization itself is subjective. So you have to set them in the space of your data. That's exactly what the Galactic Center Group at UCLA is doing, and they can show that it gives them much better results (in terms of bias and coverage) than setting the priors in dumber “flat” ways (which is standard in the relevant literature).

One incredible thing about the work of this group is that they are still using, and still re-reducing, imaging data taken in the 1990s! That means that they are an amazing example of curation and preservation of data and reproducibility and workflow and so on. For this reason, there were information scientists at the meeting this week. It is an interesting consideration when thinking about how a telescope facility is going to be used: Will your data still be interesting 22 years from now? In the case of the Galactic Center, the answer turns out to be a resounding yes.

2017-12-11

Galactic Center review

I spent the day at UCLA, reviewing the data-analysis work of the Galactic Center Group there, for reporting to the Keck Foundation. It was a great day on a great project. They have collected large amounts of data (for more than 20 years!), both imaging and spectroscopy, to tie down the orbits of the stars near the Galactic Center black hole, and also to tie down the Newtonian reference frame. The approach is to process imaging and spectroscopy into astrometric and kinematic measurements, and then fit those measurements with a physical model. Among the highlights of the day were arguments about priors on orbital parameters, and descriptions of post-Newtonian terms that matter if you want to test General Relativity. Or test for the presence of dark matter concentrated at the center of the Galaxy.

2017-12-10

the assumptions underlying EPRV

The conversation on Friday with Cisewski and Bedell got me thinking all weekend. It appears that the problem of precise RV difference measurement becomes ill-posed once we permit the stellar spectrum to vary with time. I felt like I nearly had a breakthrough on this today. Let me start by backing up.

It is impossible to obtain exceedingly precise absolute radial velocities (RVs) of stars, because to get an absolute RV, you need a spectral model that puts the centroids of the absorption lines in precisely the correct locations. Right now physical models of convecting photospheres have imperfections that lead to small systematic differences in line shapes, depths, and locations between the models of stars and the observations of stars. Opinions vary, but most astronomers would agree that this limits absolute RV accuracy at the 0.3-ish km/s level (not m/s level, km/s level).

How is it, then, that we measure at the m/s level with extreme-precision RV (EPRV) projects? The answer is that as long as the stellar spectrum doesn't change with time, we can measure relative velocity changes to arbitrary accuracy! That has been an incredibly productive realization, leading as it did to the discovery, confirmation, or characterization of many hundreds of planets around other stars!

The issue is: Stellar spectra do change with time! There is activity, and also turbulent convection, and also rotation. This throws a wrench into the long-term EPRV plans. It might even partially explain why current EPRV projects never beat m/s accuracy, even when the data (on the face of it) seem good enough to do better. Now the question is: Do the time variations of stellar spectra put an absolute floor on relative-RV measurement? That is, do they limit ultimate precision?

I think the answer is no. But the Right Thing To Do (tm) might be hard. It will involve making some new assumptions. No longer will we assume that the stellar spectrum is constant with time. But we will have to assume that spectral variations are somehow uncorrelated (in the long run) with exoplanet phase. We might also have to assume that the exoplanet-induced RV variations are dynamically predictable. Time to work out exactly what we need to assume and how.

2017-12-08

all about radial velocities

The day started with a conversation among Stuermer (Chicago), Montet (Chicago), Bedell (Flatiron), and me about the problem of deriving radial velocities from two-d spectroscopic images rather than going through one-d extractions. We tried to find scope for a minimal paper on the subject.

The day ended with a great talk by Jessi Cisewski (Yale) about topological data analysis. She finally convinced me that there is some there there. I asked about using automation to find best statistics, and she agreed that it must be possible. Afterwards, Ben Wandelt (Paris) told me he has a nearly-finished project on this very subject. Before Cisewski's talk, she spoke to Bedell and me about our EPRV plans. That conversation got me concerned about the non-identifiability of radial velocity if you let the stellar spectrum vary with time. Hmm.

2017-12-07

what's the circular acceleration?

Ana Bonaca (Harvard) and I started the day with a discussion that was in part about how to present the enormous, combinatoric range of results we have created with our information-theory project. One tiny point there: How do you define the equivalent of the circular velocity in a non-axisymmetric potential? There is no clear answer. One option is to do something relating to averaging the acceleration around a circular ring. Another is to use v^2/R locally. Another is to use that locally, but only on the radial component of the acceleration.
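A toy version of two of these definitions (ring-averaged versus local radial acceleration), for a made-up, weakly non-axisymmetric potential; everything here is invented for illustration:

```python
import numpy as np

# Logarithmic potential plus a weak m=2 (bar-like) perturbation.
def potential(x, y):
    r2 = x**2 + y**2
    phi = np.arctan2(y, x)
    return 0.5 * np.log(r2) + 0.05 * np.cos(2.0 * phi) / np.sqrt(r2)

# Radial component of the acceleration, by finite-differencing the potential.
def radial_acceleration(x, y, eps=1e-5):
    r = np.sqrt(x**2 + y**2)
    dpdx = (potential(x + eps, y) - potential(x - eps, y)) / (2 * eps)
    dpdy = (potential(x, y + eps) - potential(x, y - eps)) / (2 * eps)
    return -(dpdx * x + dpdy * y) / r

R = 1.0
phis = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
a_R = radial_acceleration(R * np.cos(phis), R * np.sin(phis))

v_circ_ring = np.sqrt(R * np.mean(-a_R))  # averaged around the ring
v_circ_local = np.sqrt(R * -a_R[0])       # local, at azimuth zero
print(v_circ_ring, v_circ_local)
```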

While I was proctoring an exam, Megan Bedell (Flatiron) wrote me to say that our one-d, data-driven spectroscopic RV extraction code is now performing almost as well as the HARPS pipeline, on real data. That's exciting. We had a short conversation about extending our analysis to more stars to make the point better. We believe that our special sauce is our treatment of the tellurics, but we are not yet certain of this.

2017-12-06

Gaia-based training data, GANs, and optical interferometry

In today's Gaia DR2 working meeting, I worked with Christina Eilers (MPIA) to build the APOGEE+TGAS training set we could use to train her post-Cannon model of stellar spectra. The important idea behind the new model is that we are no longer trying to specify the latent parameters that control the spectral generation; we are using uninterpreted latents. For this reason, we don't need complete labels (or any labels!) for the training set. That means we can train on, and predict, any labels or label subset we like. We are going to use absolute magnitude, and thereby put distances onto all APOGEE giants. And thereby map the Milky Way!

In stars group meeting, Richard Galvez (NYU) started a lively discussion by showing how generative adversarial networks work and giving some impressive examples on astronomical imaging data. This led into some good discussion about uses and abuses of complex machine-learning methods in astrophysics.

Also in stars meeting, Oliver Pfuhl (MPA) described to us how the VLT four-telescope interferometric imager GRAVITY works. It is a tremendously difficult technical problem to perform interferometric imaging in the optical: You have to keep everything aligned in real time to a tiny fraction of a micron, and you have little carts with mirrors zipping down tunnels at substantial speeds! The instrument is incredibly impressive: It is performing milli-arcsecond astrometry of the Galactic Center, and it can see star S2 move on a weekly basis!

2017-12-05

purely geometric spectroscopic parallaxes

Today was a low research day; it got cut short. But Eilers made progress on the semi-supervised GPLVM model we have been working on. One thing we have been batting around is scope for this paper. Scope is challenging, because the GPLVM is not going to be high performance for big problems. Today we conceived a scope that is a purely geometric spectroscopic parallax method. That is, a spectroscopic parallax method (inferring distances from spectra) that makes no use of stellar physical models whatsoever, not even in training!

2017-12-04

Spitzer death; nearest neighbors

Today was spent at the Spitzer Science Center for the 39th meeting of the Oversight Committee, on which I have served since 2008. This meeting was just like every other: I learned a huge amount! This time about how the mission comes to a final end, with the exercise of various un-exercised mechanisms, and then the expenditure of all propellants and batteries. We discussed also the plans for the final proposal call, and the fitness of the observatory to observe way beyond its final day. On that latter note: We learned that NASA will transfer operations of Spitzer to a third party, for about a million USD per month. That's an interesting opportunity for someone. Or some consortium.

In unrelated news, Christina Eilers (MPIA) executed a very simple (but unprecedented) idea today: She asked what would happen with a data-driven model of stellar spectra (APOGEE data) if the model is simply nearest neighbor: That is, each test-set object is given the labels of its nearest (in a chi-squared sense) training-set object. The answer is impressive: the nearest-neighbor method is only slightly worse than the quadratic data-driven model known as The Cannon. This all relates to the point that most machine-learning methods are—in some sense—nearest-neighbor methods!
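In sketch form (the array names and shapes are my own, invented for illustration):

```python
import numpy as np

# Each test-set spectrum gets the labels of the training-set spectrum that is
# closest in a chi-squared sense (inverse-variance-weighted squared residuals).

def nearest_neighbor_labels(train_flux, train_ivar, train_labels, test_flux):
    labels = []
    for flux in test_flux:
        chi2 = np.sum(train_ivar * (train_flux - flux) ** 2, axis=1)
        labels.append(train_labels[np.argmin(chi2)])
    return np.array(labels)
```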

2017-12-01

seeing giants shrink in real time? the dark matter

At parallel-working session in my office at NYU, I worked with Lauren Blackburn (TransPerfect) to specify a project on clustering and classification of red-giant asteroseismic spectra. The idea (from Tim Bedding's group at Sydney) is to distinguish the stars that are going up the red-giant branch from the ones coming down. Blackburn asked if we could just see the spectra change with time for the stars coming down. I said “hell no” and then we wondered: Maybe? That's not the plan, but we certainly should check that!

In the NYU Astro Seminar, Vera Glusevic (IAS) gave a great talk on inferring the physical properties of the dark matter (that is, not just the mass and cross-section, but real interaction parameters in natural models). She has results showing that combinations of different direct-detection targets, being differently sensitive to spin-dependent interactions, could be very discriminating. But she did have to assume large cross sections, so her results are technically optimistic. She then blew us away with strong limits on dark-matter models using the CMB (and the dragging of nuclei by dark-matter particles in the early universe). Great, and ruling out some locally popular models!

Late in the day, Bedell and I did a writing workshop on our EPRV paper. We got a tiny bit done, which should be called not “tiny” but really a significant achievement. Writing is hard.

2017-11-30

so much Gaussian processes

The day was all GPs. Markus Bonse (Darmstadt) showed various of us very promising GPLVM results for spectra, where he is constraining part of the (usually unobserved) latent space to look like the label space (like stellar parameters). This fits into the set of things we are doing to enrich the causal structure of existing machine-learning methods, to make them more generalizable and interpretable. In the afternoon, Dan Foreman-Mackey (Flatiron) found substantial issues with GP code written by me and Christina Eilers (MPIA), causing Eilers and me to have to re-derive and re-write some analytic derivatives. That hurt! Especially since the derivatives involve some hand-coded sparse linear algebra. But right at the end of the day (like with 90 seconds to spare), we got the new derivatives working in the fixed code. Feelings were triumphant.

2017-11-29

what's our special sauce? and Schwarzschild modeling

My day started with Dan Foreman-Mackey (Flatiron) smacking me down about my position that it is causal structure that makes our data analyses and inferences good. The context is: Why don't we just turn on the machine learning (like convnets and GANs and so on)? My position is: We need to make models that have correct causal structure (like noise sources and commonality of nuisances and so on). But his position is that, fundamentally, it is because we control model complexity well (which is hard to do with extreme machine-learning methods) and we have a likelihood function: We can compute a probability in the space of the data. This gets back to old philosophical arguments that have circulated around my group for years. Frankly, I am confused.

In our Gaia DR2 prep meeting, I had a long conversation with Wyn Evans (Cambridge) about detecting and characterizing halo substructure with a Schwarzschild model. I laid out a possible plan (pictured below). It involves some huge numbers, so I need some clever data structures to trim the tree before we compute 10^20 data–model comparisons!

Late in the day, I worked with Christina Eilers (MPIA) to speed up her numpy code. We got a factor of 40! (Primarily by capitalizing on sparseness of some operators to make the math faster.)


2017-11-28

empirical yields; galaxy alignments; linear algebra foo.

Early in the day, Kathryn Johnston (Columbia) convened the local Local Group group (yes, I wrote that right) at Columbia. We had presentations from various directions (and I could only be at half of the day). Subjective highlights for me included the following: Andrew Emerick (Columbia) showed that there is a strong prediction that in dwarf galaxies, AGB-star yields will be distributed differently than supernova yields. That should be observable, and might be an important input to my life-long goal of deriving nucleosynthetic yields from the data (rather than theory). Wyn Evans (Cambridge) showed that you can measure some statistics of the alignments of dwarf satellite galaxies with respect to their primary-galaxy hosts, using the comparison of the Milky Way and M31. M31 is more constraining, because we aren't sitting near the center of it! These alignments appear to have the right sign (but maybe the wrong amplitude?) to match theoretical predictions.

Late in the day, Christina Eilers (MPIA) showed up and we discussed with Dan Foreman-Mackey (Flatiron) our code issues. He noted that we are doing the same linear-algebra operations (or very similar ones) over and over again. We should not use solve, but rather use cho_factor and then cho_solve, which permits fast repeated solves given the pre-existing factorization. He also pointed out that in the places where we have missing data, the factorization can be updated in a fast way rather than fully re-computed. Those are good ideas! As I often like to say, many of my super-powers boil down to just knowing who to ask about linear algebra.
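The basic move, in sketch form (a toy positive-definite matrix with invented sizes):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Factorize the symmetric positive-definite matrix once, then reuse the
# factorization for every right-hand side, instead of calling solve each time.

A = np.random.randn(100, 5)
K = A @ A.T + 1e-3 * np.eye(100)   # toy positive-definite matrix

factor = cho_factor(K)             # the O(n^3) work happens once
for _ in range(10):
    b = np.random.randn(100)
    x = cho_solve(factor, b)       # each subsequent solve is only O(n^2)
```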

2017-11-27

self-calibration for EPRV; visualizations of the halo

The morning started with Bedell and Foreman-Mackey and me devising a self-calibration approach to combining the individual-order radial velocities we are getting for the different orders at the different epochs for a particular star in the HARPS archive. We need inverse variances for weighting in the fit, so we got those too. The velocity-combination model is just like the uber-calibration of the SDSS imaging we did so many years ago. We discussed optimization vs marginalization of nuisances, and decided that the data are going to be good enough that probably it doesn't matter which we do. I have to think about whether we have a think-o there.
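A minimal version of the combination model we have in mind (the array names, shapes, and iteration scheme are my assumptions, not our final code): per-epoch velocities plus per-order offsets, fit with inverse-variance weights, in the spirit of the SDSS uber-calibration.

```python
import numpy as np

# v[n, o]: RV measured at epoch n from order o; w[n, o]: its inverse variance.
# Model: v[n, o] ~ rv[n] + offset[o]; fit by alternating weighted least squares.

def combine_orders(v, w, n_iter=20):
    offset = np.zeros(v.shape[1])
    for _ in range(n_iter):
        rv = np.sum(w * (v - offset[None, :]), axis=1) / np.sum(w, axis=1)
        offset = np.sum(w * (v - rv[:, None]), axis=0) / np.sum(w, axis=0)
        offset -= np.average(offset, weights=np.sum(w, axis=0))  # break the degeneracy
    return rv, offset
```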

After that, I worked with Anderson and Belokurov on finding kinematic (phase-space) halo substructure in fake data, in SDSS, and in Gaia DR2. We have been looking at proper motions, because for halo stars, these are better measured than parallaxes! Anderson made some great visualizations of the proper-motion distribution in sky (celestial-coordinate) pixels. Today she made some visualizations of celestial-coordinate distribution in proper-motion pixels. I am betting this latter approach will be more productive. However, Belokurov and I switched roles today, with me arguing for “visualize first, think later” and him arguing for making sensible metrics or models for measuring overdensity significances.

Andy Casey (Monash) is in town! I had a speedy conversation with him about calibration, classification, asteroseismology, and The Cannon.

2017-11-22

finding and characterizing halo streams in Gaia

Our weekly Gaia DR2 prep meeting once again got us into long arguments about substructure in the Milky Way halo, how to find it and how to characterize it. Wyn Evans (Cambridge) showed that when he looks at halo substructures he has found in terms of actions, they show larger spreads in action in some potentials and smaller in others. Will this lead to constraints on dynamics? Robyn Sanderson (Caltech) thinks so, and so did everyone in the room. Kathryn Johnston (Columbia) and I worked through some ideas for empirical or quasi-empirical stream finding in the data space, some of them inspired by the Schwarzschild-style modeling suggested by Sanderson in my office last Friday. And Lauren Anderson showed plots of Gaia expectations for substructure from simulations, visualized in the data space. We discussed many other things!

2017-11-21

gradients in cosmological power, and EPRV

In the morning, Kate Storey-Fisher (NYU) dropped by to discuss our projects on finding anomalies in the large-scale structure. We discussed the use of mocks to build code that will serve as a pre-registration of hypotheses before we test them. We also looked at a few different kinds of anomalies for which we could easily search. One thing we came up with is a generalization of the real-space two-point function estimators currently used in large-scale structure into estimators not just of the correlation function, but also its gradient with respect to spatial position. That is, we could detect arbitrary generalizations of the hemispheric asymmetry seen in Planck but in a large-scale structure survey, and with any scale-dependence (or different gradients at different scales). Our estimator is related to the concept of marked correlation functions, I think.

Late in the day, Bedell (Flatiron), Montet (Chicago), and Foreman-Mackey (Flatiron) showed great progress on measuring RVs for stars in high-resolution spectroscopy. Their innovation is to simultaneously fit all velocities, a stellar spectrum, and a telluric spectrum, all data-driven. The method scales well (linearly with data size) and seems to suggest that we might beat the m/s barrier in measuring RVs. This hasn't been demonstrated, but the day ended with great hopes. We have been working on this model for weeks or months (depending on how you count) but today all the pieces came together. And it easily generalizes to include various kinds of variability.

2017-11-20

things are looking good

I had early-morning chats with Ana Bonaca (Harvard), who has very nice sanity checks showing that our Fisher analysis (Cramér–Rao analysis) is delivering sensible constraints on the Milky-Way potential, and Christina Eilers (MPIA), who is getting sensible results out of her novel modification of the GPLVM for stellar spectra. After that, I took the rest of the day off for my health.

2017-11-17

refactoring, seminar technique, search by modeling

In parallel working session this morning (where collaborators gather in my office to work together), Montet, Bedell, and I worked out a re-factor of the RV code they have been working on, in order to make it more efficient and easier to maintain. That looked briefly like a big headache and challenge, but in the end the re-factor got completely done today. Somehow it is brutal to consider a refactor, but in the end it is almost always a good idea (and much easier than expected). I'm one to talk: I don't write much code directly myself these days.

Sarah Pearson (Columbia) gave the NYU Astro Seminar today. It was an excellent talk on what we learn about the Milky Way from stellar streams. She did exactly the right thing of spending more than half of the talk on necessary context, before describing her own results. She got the level of this context just right for the audience, so by the time she was talking about what she has done (which involves chaos on the one hand, and perturbations from the bar on the other), it was comprehensible and relevant for everyone. I wish I could boil down “good talk structure” to some simple points, but I feel like it is very context-dependent. Of course one thing that's great about the NYU Astro Seminar is that we are an interactive audience, so the speaker knows where the audience is.

After lunch I had a great and too-short discussion with Robyn Sanderson (Caltech), continuing ideas that came up on Wednesday about search for halo substructure. We discussed the point that when you transform the data to something like action space (or indeed do any non-linear transformation of the data), the measurement uncertainties become crazy and almost impossible to marginalize or even visualize. Let alone account for properly in a scientific analysis. So then we discussed whether we could search for substructure by transforming orbits into the data space and associating data with orbits, in the space where the data uncertainties are simple. As Sanderson pointed out, that's Schwarzschild modeling. Might be a great idea for substructure search.

2017-11-16

theory of anomalies

Today was a low-research day, because [reality]. However, Kate Storey-Fisher (NYU) and I had a great discussion with Josh Ruderman (NYU) about anomalies in the LSS. As my loyal reader knows, we are looking at constructing a statistically valid, safe search for deviations from the cosmological model in the large-scale structure. That search is going to focus on the overlap (if there is any overlap) between anomalies that are robust to systematic problems with the data (that is, anomalies that can't be mocked up by reasonable adjustments to our beliefs about our selection function) and anomalies that live in spaces suggested or predicted by theoretical ideas about non-standard cosmological theories. In particular, we are imagining theories that have the dark sector do interesting things at late times. We didn't make concrete plans in this meeting, except to read up on the literature about late decays of the dark matter, dark radiation, and other kinds of dark–dark interactions that could be happening in the current era.

2017-11-15

actions or observables? forbidden planet radii

The highlight today of our Gaia DR2 prep meeting was a plenary argument (recall that this meeting is supposed to be parallel working, not plenary discussion, at least not mainly) about how to find halo substructure in the data. Belokurov (Cambridge) and Evans (Cambridge) showed some nice results of searching for substructure in something close to the raw data. We argued about the value of transforming to a space of invariants. The invariants are awesome, because clustering is long-lived and stark there. But they are also terrible, because (a) the transformation introduces unnecessary (and possibly wrong) assumptions into the problem and (b) normal uncertainties in the data space become arbitrarily ugly noodles in the action space. We discussed whether there are intermediate approaches, that get the good things about working in observables, without too many of the bad things of working in the actions. We didn't make specific plans, but many good ideas hit the board.

Stars group meeting contained too many results to describe them all! It was great, and busy. But the stand-out result for me (and this is just me!) was a beautiful result by Vincent Van Eylen (Leiden) on exoplanet radii. As my loyal reader knows, the most common kinds of planets are not Earths or Neptunes, but something in-between, variously called super-Earths and mini-Neptunes. Now it turns out that even this class bifurcates, with a bimodal distribution—there really is a difference between super-Earths and mini-Neptunes, and little in between. Now Van Eylen shows that this gap really looks like it goes exactly to zero: There is a range of planet radii that really don't exist in the world. Note to reader: This effect probably depends on host star and many other things, but it is incredibly clear in this particular sample. Cool thing: The forbidden radii are a function of radius, and the forbidden zone was (loosely) predicted before it was observed. Just incredible. Van Eylen's super-power: Revision of asteroseismic stellar radii to get much more precision on stars and therefore on the transiting planets they host. What a result.

2017-11-14

you never really understand a model until you implement it

Eilers (MPIA) and I discussed puzzling results she was getting in which she could fit just about any data (including insanely random data) with the Gaussian Process latent variable model (GPLVM) but with no predictive power on new data. We realized that we were missing a term in the model: We need to constrain the latent variables with a prior (or regularization), otherwise the latent variables can go off to crazy corners of space and the data points have (effectively) nothing to do with one another. Whew! This all justifies a point we have been making for a while, which is that you never really understand a model until you implement it.

2017-11-13

modeling the heck out of the atmosphere

The day started with planning between Bedell (Flatiron), Foreman-Mackey (Flatiron), and me about a possible tri-linear model for stellar spectra. The model is that the star has a spectrum, which is drawn from a subspace in spectral space, and Doppler shifted, and the star is subject to telluric absorption, which is drawn from a subspace in spectral space, and Doppler shifted. The idea is to learn the telluric subspace using all the data ever taken from a spectrograph (HARPS, in this case). But of course the idea behind that is to account for the tellurics by simultaneously fitting them and thereby getting better radial velocities. This was all planning for the arrival of Ben Montet (Chicago), who arrived later in the day for a two-week visit.

At lunch time, Mike Blanton (NYU) gave the CCPP brown-bag talk about SDSS-V. He did a nice job of explaining how you measure the composition of ionized gas by looking at its thermal state. And much more!

2017-11-10

detailed abundances of pairs; coherent red-giant modes

In the morning I sat in on a meeting of the GALAH team, who are preparing for a data release to precede Gaia DR2. In that meeting, Jeffrey Simpson (USyd) showed me GALAH results on the Oh et al comoving pairs of stars. He finds that pairs from the Oh sample that are confirmed to have the same radial velocity (and are therefore likely to be truly comoving) have similar detailed element abundances, and the ones that aren't, don't. So awesome! But interestingly he doesn't find that the non-confirmed pairs are as different as randomly chosen stars from the sample. That's interesting, and suggests that we should make (or should have made) a carefully constructed null sample for A/B testing etc. Definitely for Gaia DR2!

In the afternoon, I joined the USyd asteroseismology group meeting. We discussed classification of seismic spectra using neural networks (I advised against) or kernel SVM (I advised in favor). We also discussed using very narrow (think: coherent) modes in red-giant stars to find binaries. This is like what my host Simon Murphy (USyd) does for delta-Scuti stars, but we would not have enough data to phase up little chunks of spectrum: We would have to do one huge simultaneous fit. I love that idea, infinitely! I asked them to give me a KIC number.

I gave two talks today, making it six talks (every one very different) in five days! I spoke about the pros and cons of machine learning (or what is portrayed as machine learning on TV) as my final Hunstead Lecture at the University of Sydney. I ended up being very negative on neural networks in comparison to Gaussian processes, at least for astrophysics applications. In my second talk, I spoke about de-noising Gaia data at Macquarie University. I got great crowds and good feedback at both places. It's been an exhausting but absolutely excellent week.

2017-11-09

mixture of factor analyzers; centroiding stars

On this, day four of my Hunstead Lectures, Andy Casey (Monash) came into town, which was absolutely great. We talked about many things, including the mixture-of-factor-analyzers model, which is a good and under-used model in astrophysics. I think (if I remember correctly) that it can be generalized to heteroskedastic and missing data too. We also talked about using machine learning to interpolate models, and future projects with The Cannon.

At lunch I sat with Peter Tuthill (Sydney) and Kieran Larkin (Sydney) who are working on a project design that would permit measurement of the separation between two (nearby) stars to better than one millionth of a pixel. It's a great project; the designs they are thinking about involve making a very large, but very finely featured point-spread function, so that hundreds or thousands of pixels are importantly involved in the positional measurements. We discussed various directions of optimization.

My talk today was about The Cannon and the relationships between methods that are thought of as “machine learning” and the kinds of data analyses that I think will win in the long run.

2017-11-08

MCMC, asteroseismology, delta-Scutis

Today I am on my third of five talks in five days, as part of my Hunstead Lectures at Sydney. I spoke about MCMC sampling. A lot of what I said was a subset of things we write in our recent manual on MCMC. At the end of the talk there was some nice discussion of detailed balance, with contributions from Tuthill (USyd) and Sharma (USyd).

At lunch I grilled asteroseismology guru Tim Bedding (USyd) about measuring the large frequency separation delta-nu in a stellar light curve. My position is that you ought to be able to do this without explicitly taking a Fourier Transform, but rather as some kind of mathematical operation on the data. That is, I am guessing that there is a very good and clever frequentist estimator for it. Bedding expressed the view that there already is such a thing, in that there are methods for automatically generating delta-nu values. They do take a Fourier Transform under the hood, but they are nonetheless good frequentist estimators. But I want to work on sparser data, like Gaia and LSST light curves. I need to understand this all better. We also talked about how it is possible for a gastrophysics-y star to have oscillations with quality factors better than 10^5. Many stars do!

That's all highly relevant to the work of Simon Murphy (USyd), who finds binary stars by looking at phase drifts in highly coherent delta-Scuti star oscillations. He and I spent an afternoon hacking on models for one of his delta-Scuti stars, with the hopes of measuring the quality factor Q and also maybe exploring new and more information-preserving methods for finding the binary companions. This method of finding binaries has similar sensitivity to astrometric methods, which makes it very relevant to the binaries that Gaia will discover.

2017-11-07

noise, calibration, and GALAH

Today I gave my second of five Hunstead Lectures at University of Sydney. It was about finding planets in the Kepler and K2 data, using our non-stationary Gaussian Process or linear model as a noise model. This is the model we wrote up in our Research Note of the AAS. In the question period, the question of confirmation or validation of planets came up. It is very real that the only way to validate most tiny planets is to make predictions for other data. But when will we have data more sensitive than Kepler? This is a significant problem for much of bleeding-edge astronomy.

Early in the morning I had a long call with Jason Wright (PSU) and Bedell (Flatiron) about the assessment of the calibration programs for extreme-precision RV surveys. My position is that it is possible to assess the end-to-end error budget in a data-driven way. That is, we can use ideas from causal inference to figure out what parts of the RV noise are coming from telescope plus instrument plus software. Wright didn't agree: He believes that large parts of the error budget can't be seen or calibrated. I guess we better start writing some kind of paper here.

In the afternoon I had a great discussion with Buder (MPIA), Sharma (USyd), and Bland-Hawthorn (USyd) about the current status of detailed elemental abundance measurements in GALAH. The element–element plots look fantastic, and clear trends and high precision are evident, just looking at the data. To extract these abundances, Buder has made a clever variant of The Cannon which makes use of the residuals away from a low-dimensional model to measure the detailed abundances. They are planning on doing a large data release in April.

2017-11-06

five talks in five days

On the plane to Sydney, I started an outline for a paper with Bedell (Flatiron) on detailed elemental abundances, and the dimensionality or interpretability of the elemental subspace. I also started to plan the five talks I am going to give in five days as the Hunstead Lecturer. On arrival I went straight to University of Sydney and started lecturing. My first talk was on fitting a line to data, with a concentration on the assumptions and their role in setting procedures. That is, I emphasized that you shouldn't choose a procedure by which you fit your data: You should choose a set of assumptions you are willing to make about your data. Once you do that, the procedure will flow from the assumptions. After my talk I had a great lunch with graduate students at Sydney. The range of research around the table was remarkable. I plan to spend some of the week learning about asteroseismology.

2017-11-03

best-ever detailed abundances

In Friday parallel-working session, Bedell (Flatiron) showed me all 900-ish plots of every element against every element for her sample of 80 Solar twins. Incredible. Outrageous precision, and outrageous structure. And it is a beautiful case where you can just see the precision directly in the figures: There are clearly real features at very small scales. And hugely informative structures. This is the ideal data set for addressing something that has been interesting me for a while: What is the dimensionality of the chemical-abundance space? And can we see different nucleosynthetic processes directly in the data?

Late in the day, Jim Peebles (Princeton) gave the Astro Seminar. He spoke about three related issues in numerical simulations of galaxies: They make bulges that are too large and round; they make halos that have too many stars; and they don't create a strong enough bimodality between disks and spheroids. There were many galaxy-simulators in the audience, so it was a lively talk, and a very lively dinner afterwards.

2017-11-02

combinatoric options for a paper

I had my weekly call with Bonaca (Harvard), about information theory and cold stellar streams. We discussed which streams we should be considering in our paper. We have combinatoric choices, because there are N streams and K Milky-Way parameters; we could constrain any combination of parameters with any combination of streams! And it is even worse than that, because we are talking about basis-function expansions for the Milky-Way potential, which means that K is tending to infinity! We tentatively decided to do something fairly comprehensive and live with the fact that we won't be able to fully interpret it with finite page charges.

2017-11-01

circumbinary planets, next-gen EPRV

The Gaia DR2 workshop and Stars Group meeting were both very well attended! At the former, Price-Whelan (Princeton) showed us PyGaia, a tool from Anthony Brown's group in Leiden to simulate the measurement properties of the Gaia Mission. It is really a noise model. And incredibly useful, and easy to use.

In the Stars meeting, so many things! Andrew Mann (Columbia) spoke about the reality or controversies around Planet 9, which got us arguing also about claims of extra-solar asteroids. Kopytova (ASU) described her project to sensitively find chemical abundance anomalies among stars with companions, and asked the audience to help find ways that spurious effects could creep in. Her method is very safe, so it takes a near-conspiracy, I think, but Brewer (Yale) disagreed. Veselin Kostov (Goddard) talked about searching for circumbinary planets. This is a good idea! He has found a few in Kepler but believes there are more hidden. It is interesting for TESS for a number of reasons, one of which is that you can sometimes infer the period of the exoplanet with only a short stretch of transit data (much shorter than the period), by capitalizing on a double-transit across the binary.

Didier Queloz (Cambridge) was in town for the day. Bedell (Flatiron) and I discussed with him next-generation projects for HARPS and new HARPS-like instruments. He is pushing for extended campaigns on limited sets of bright stars. I like this idea for its statistical and experimental-design simplicity! But (as he notes) it is hard to get the heterogeneous community behind such big projects. He has a project to pitch, however, if people are looking to buy in to new data sources. He, Bedell, and I discussed what we know about limits to precision in this kind of work. We aren't far apart, in that we all agree that HARPS (and its competitors) are extremely well calibrated machines, much better calibrated than the end-to-end precision obtained.

2017-10-31

searches for anomalies

Today Kate Storey-Fisher (NYU) and I met with Mike Blanton (NYU) and Zhongxu Zhai (NYU) to discuss possible projects that Storey-Fisher and I have been talking about. We are thinking about trying to systematize (and pre-register) the search for anomalies in cosmological surveys. The idea (which is still vague) is to somehow lexicographically order all anomalies we could search for, and then search, such that we can keep exquisite track of the number of independent hypotheses we have checked.

Blanton and Zhai had some advice for us. One category of advice was around systematics: Anomalies and systematics in the data might appear similar! So we should think about anomalies that are somehow least sensitive to these systematics. One good thing is that we are working at the home of many of the tools that we need to make these assessments. Another category of advice was to think about what anomalies are motivated by questions of theory in the dark sector, in galaxy formation, or in the initial conditions. Theory-inspired (if not predicted) anomalies are more productive, in a scientific-literature sense, than randomly specified anomalies. We are close to being able to specify a project!

2017-10-30

detailed abundances and stellar companions

Taisiya Kopytova arrived in NYC for a few days to work on stellar abundances and orbital companions. Her project is very well designed: She has a set of red-giant stars in APOGEE where we know they have companions. For each of these stars with companions, she has found a set of matched stars—matched in stellar parameters—that don't have companions (or at least no detectable ones). She then compares the detailed chemical abundances between these two samples. The approach is extremely conservative and very robust to problems in the data: For a false effect to appear, it has to be an effect that causes a companion to be detected (or not detected)! And she finds signals.

One disturbing thing is that we find signal-to-noise effects, and we get slightly different results when we use APOGEE DR13 or DR14 data. So we might need to match on signal-to-noise as well as stellar parameters.

2017-10-27

paper scopes, voids

In Friday parallel-working session, Megan Bedell (Flatiron) and I discussed (for the nth time) the scope of the first paper and next papers in our extreme-precision radial-velocity work. We realized that paper 1 is pretty much ready to go! We also realized that the point should not be about what people might be doing wrong, but about what things you can do that are easy and close to correct. In particular, the point that a data-driven spectral model can come close to saturating the Cramér–Rao bound on radial-velocity precision. This was not obvious at the outset, because some of the information in the data must go into the spectral model (and not the RV measurement). That's a good point!

Renée Hlozek (Toronto) gave the Astrophysics Seminar. In part she talked about the negative S–Z effect from voids. In another part, she talked about constraining light scalar dark matter with large-scale structure. Both problems I am interested in for near-future research. In the afternoon, she and Alex Malz (NYU) and I talked about advising and mentoring. Hlozek is a deep thinker about these things.

2017-10-26

geometry bugs

In our weekly meeting, Ana Bonaca (Harvard) and I discovered a super-subtle bug in how the covariance matrices we are making (context: the Cramér–Rao bound on Milky-Way parameters given observations (and a model) of cold stellar streams) are being plotted. Damn geometry is hard! But she fixed the code and all our covariances look really good now. We think we understand the trade-offs between different parameters, given different data. Time to write! And use the framework for planning new observations.
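For the record, the geometry that bit us is the usual one: turning a 2x2 covariance matrix into an ellipse on a plot. A sketch with an invented covariance, using the eigen-decomposition so the orientation and axis lengths come out right:

```python
import numpy as np
import matplotlib.pyplot as plt

# Draw the 1-sigma ellipse of a 2x2 covariance matrix.
C = np.array([[2.0, 1.2],
              [1.2, 1.0]])
vals, vecs = np.linalg.eigh(C)

theta = np.linspace(0.0, 2.0 * np.pi, 200)
circle = np.stack([np.cos(theta), np.sin(theta)])
ellipse = vecs @ (np.sqrt(vals)[:, None] * circle)  # scale the axes, then rotate

plt.plot(ellipse[0], ellipse[1])
plt.axis("equal")  # unequal axes are a classic way to fool yourself about orientation
plt.show()
```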

I spent the rest of the day not working on my NSF proposal, which is very bad!

2017-10-25

Gaia DR2 halo and disk projects

Today Kathryn Johnston (Columbia) came through Flatiron to discuss Gaia DR2 projects in the Milky-Way halo. She made the very nice point that we could use a Gaia simulator like PyGaia to “observe” the Bullock & Johnston all-substructure simulations to see how halo substructure appears in a realistic DR2 data set. We discussed clustering algorithms and the relationships between applying clustering directly to the observed data vs transforming the data to some better space (invariants, say) vs doing some kind of inference or data-driven model that respects the Gaia noise model and so on. We are looking for methods that will be powerful, but simple, since we are looking for fast projects to do in the immediate follow-up period to the data release.

Our conversation veered into chemical-abundance space, where we all realized that Megan Bedell (Flatiron) is sitting on an amazing chemical-tagging data set. She only has 80 stars, but because they are Solar twins, they have exceedingly good chemical measurements. Can we use these to measure scattering processes in the Milky-Way disk?

We also briefly discussed something inspired by Alyssa Goodman (Harvard), who spoke first thing in the morning at the Scientific Visualization conference that is on at Flatiron: Can we measure our position relative to the disk plane, and maybe see fluctuations in that plane? Goodman says that the Sun is 25 pc above the plane, and that is obvious (she says) from the radio observations of HI gas. But Bovy (if I recall correctly) looked at this in Gaia DR1 and finds that our offset from the midplane is less than 10 pc. Is there an offset between stars and gas? If so, why? If not, who is wrong? Great set of questions for DR2.

2017-10-24

heteroskedastic GPLVM; search for anomalies

Christina Eilers (MPIA) and I have decided to re-implement the Gaussian Process latent-variable model, with modifications that permit the data to be heteroskedastic (and missing) and the kernel function to be different along different dimensions of the data space. We spent an hour today de-bugging analytic derivatives. We need these, because there is a non-convex optimization as part of that model. We resolved to bring the action to New York and have Foreman-Mackey (Flatiron) help us re-implement everything in george. I was left with homework to write this model down in full generality.
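The workhorse sanity check for this kind of de-bugging, in sketch form (a stand-in objective, not the GPLVM itself): compare the analytic gradient against centered finite differences.

```python
import numpy as np

def objective(p):
    return np.sum(np.sin(p) * p**2)

def analytic_gradient(p):
    return np.cos(p) * p**2 + 2.0 * p * np.sin(p)

p = np.random.randn(7)
eps = 1e-6
numeric = np.array([
    (objective(p + eps * e) - objective(p - eps * e)) / (2 * eps)
    for e in np.eye(len(p))
])
print(np.max(np.abs(numeric - analytic_gradient(p))))  # should be very small
```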

Kate Storey-Fisher (NYU) and I got close to specifying a well-posed problem in our nascent project to find CMB-like anomalies in large-scale structure data. We read this paper by NYU locals about prospects for future surveys, but we want to work with real data if we can. We discussed how a search for anomalies can be cast as a parameter estimation problem. We haven't settled on a methodology, though.

2017-10-23

linear models for nuisances

The day started with Rodrigo Luger (UW) and Dan Foreman-Mackey (Flatiron) and me discussing a range of projects. They endorsed my general idea of looking for planets by searching resonances! Which is good. We tentatively decided to try to write one of the new ApJ Research Notes about our systematics models for Kepler and other projects. There are a lot of unifying good ideas there; let's spread the Good News. The idea is that it is possible to simultaneously fit a linear model and marginalize it out with a simple linear-algebra move. Work on that started almost immediately.
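The move in question, in sketch form (a toy basis and data, with an invented prior width): putting a Gaussian prior on the linear weights and marginalizing them out just adds a term to the data covariance, so the fit and the marginalization happen in one solve.

```python
import numpy as np

n, p = 300, 10
t = np.linspace(0.0, 1.0, n)
A = np.vander(t, p)              # design matrix for the linear systematics model
N = 0.01 * np.eye(n)             # data noise covariance
Lambda = 10.0 * np.eye(p)        # Gaussian prior covariance on the weights

# fake data: a signal plus linear systematics plus noise
y = np.sin(20.0 * t) + A @ (0.1 * np.random.randn(p)) + 0.1 * np.random.randn(n)

# marginalized likelihood: y ~ Normal(0, C) with C = N + A Lambda A^T
C = N + A @ Lambda @ A.T
lnlike = -0.5 * (y @ np.linalg.solve(C, y)
                 + np.linalg.slogdet(C)[1]
                 + n * np.log(2.0 * np.pi))
print(lnlike)
```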

Lauren Anderson (Flatiron) visualized the proper motions of stars in the Galaxia model as a function of sky position and distance, to see if proper motions can be used to infer distances by methods that are more clever than reduced proper motion. It looks like they can be! We discussed further improvements to the visuals, with help from Vasily Belokurov (Cambridge).

2017-10-20

stellar age–velocity relation

Jonathan Bird (Vandy) and I spent the morning working together on his paper on the age–velocity relationship in the Milky-Way disk. He has absolutely beautiful results, from APOGEE red-clump stars and Gaia DR1 transverse kinematics. The thing that is new is that (thanks Martig and Ness) he has actually useful age estimates for many hundreds of stars. And we will have the same for tens of thousands in the overlap with Gaia DR2. Indeed, we commented in the paper that SDSS-V will make this possible at scale. The great thing about the ages is that even with hundreds of stars, we get a comparable measure of the age–velocity relation to studies that involved orders of magnitude more stars.

We discussed the final presentation in the paper. We worked through the figures and drew a simple graphical model to illustrate the project. We then went, very carefully, through the assumptions of the project, so we can state them explicitly at the outset of our methods section, and then use them to structure the discussion at the end. It's a fun intellectual exercise to go through these assumptions carefully; somehow you only understand a project substantially after it is finished!

2017-10-19

self-calibration of stellar abundances

I spent the day at Vanderbilt, where I gave a talk and had many valuable conversations. Some were about data science: Andreas Berlind (Vanderbilt) is chairing a committee to propose a model for data science at Vanderbilt. We discussed the details that have been important at NYU.

One impressive project I learned about today was Hypatia, a compendium of all detailed stellar abundance measurements (and relevant housekeeping data) in the literature. Over dinner, Natalie Hinkel (Vanderbilt) and I discussed the possibility that this catalog could be used for some kind of self-calibration of all abundance measurements. That's an interesting idea, and connects to things I have discussed over the years with Andy Casey (Monash).

2017-10-18

self-calibrating pulsar arrays, and much more

I had a great conversation with Chiara Mingarelli (Flatiron) and Ellie Schwab (AMNH) today about pulsar-timing arrays and gravitational-wave sources. We are developing some ideas about self-calibration of the arrays, such that we might be able to simultaneously search for coherent sources (that is: not just stochastic backgrounds) and also precisely determine the distances to the individual pulsars to many digits of accuracy! It is futuristic stuff, and there are lots of ways it might fail badly, but if I am right that the self-calibration of the arrays is possible, it would make the arrays a few to tens of times more sensitive to sources! We started with Mingarelli assigning us some reading homework.

In the Stars group meeting, we had a productive discussion led by Megan Bedell (Flatiron), Andrew Mann (Columbia), and John Brewer (Yale) about things learned at the recent #KnowThyStar conference. There are some new uses of machine learning and data-driven models that I might need to spend some time criticizing! And it appears that there are some serious discrepancies between asteroseismic scaling relations for stellar radii and interferometric measurements. Not bigger than those expected by the stellar experts, apparently, but much bigger than assumed by some of the exoplanet community.

Prior to that, in our weekly Gaia DR2 prep working session, we discussed the use of proper motion as a distance indicator in a post-reduced-proper-motion world. That is: The assumptions underlying reduced proper motion are not great, and will be strongly violated in the DR2 data set. So let's replace it with a much better thing!

Adrian Price-Whelan (Princeton) showed some incredible properties of (flowing from beautiful design of) the astropy coordinates package. Damn!

2017-10-17

writing projects

Coming off my personal success of (finally) getting a paper on the arXiv yesterday (check the footnote on the cover page), I worked through two projects that are close to being writeable or finishable. The first is a paper with Stephen Feeney (Flatiron) on the Lutz-Kelker correction, when to use it (never) and what it is (a correction from ML to MAP). The second is a document I wrote many months ago about finding similar or identical objects in noisy data. After I read through both, I got daunted by the work that needs to happen! So I borked. I love my job! But writing is definitely hard.

2017-10-16

discovery! submission!

It was an important day for physics: The LIGO/VIRGO collaboration and a huge group of astronomical observational facilities and teams announced the discovery of a neutron-star–neutron-star binary inspiral. It has all the properties it needs to have to be the source of r-process elements, as the theorists have been telling us it would. Incredible. And a huge win for everyone involved. Lots of questions remain (for me, anyway) about the 2-s delay between GW and EM, and about the confidence with which we can say we are seeing the r process!

It was also an unusual day for me: After working a long session on the weekend, Dan Foreman-Mackey (Flatiron) and I finished our pedagogical document about MCMC sampling. I ended the day by posting it to arXiv and submitting it (although this seems insane) to a special issue of the ApJ. I don't write many first-author publications, so this was a very, very good day.

2017-10-13

calibration of ZTF; interpolation

I am loving the Friday-morning parallel working sessions in my office. I am not sure that anyone else is getting anything out of them! Today Anna Ho (Caltech) and I discussed things in my work on calibration and data-driven models (two extremely closely related subjects) that might be of use to the ZTF and SEDM projects going on at Caltech.

Late in the morning, an argument broke out about using deep learning to interpolate model grids. Many projects are doing this, and it is interesting (and odd) to me that you would choose a hard-to-control deep network when you could use an easy-to-control function space (like a Gaussian Process, stationary or non-stationary). But the deep-learning toothpaste is hard to put back into the tube! That said, it does have its uses. One of my medium-term goals is to write something about what those uses are.

2017-10-12

age-velocity; finishing

I had a great, long call with Jonathan Bird (Vandy) to discuss his nearly-finished paper on the age–velocity relation of stars in the Gaia DR1 data. We discussed the addition of an old, hot population, in addition to the population that shows the age–velocity relation. That's a good idea, and accords with our beliefs, hence even gooder.

I spent the rest of my research time today working through the text of the MCMC tutorial by Dan Foreman-Mackey (Flatiron) and me. We are trying to finish it this week (after five-ish years)!

2017-10-11

WDs in Gaia, M33, M stars, and more

In our weekly parallel-working Gaia DR2 prep meeting, two very good ideas came up. The first is to look for substructure in the white-dwarf sequence and see if it can be interpreted in terms of binarity. This is interesting for two reasons. The first is that unresolved WD binaries should be the progenitors of Type Ia supernovae. The second is that they might be formed by a different evolutionary channel than the single WDs and therefore be odd in interesting ways. The other idea was to focus on giant stars in the halo, and look for substructure in 3+2-dimensional space. The idea is: If we can get giant distances accurately enough (and maybe we can, with a model like this), we ought to see the substructure in the Gaia data alone; that is: No radial velocities necessary. Of course we will have radial velocities (and chemistry) for a lot of the stuff.

In the stars group meeting, many interesting things happened: Anna Ho (Caltech) spoke about time-domain projects just starting at Caltech. They sure do have overwhelming force. But there are interesting calibration issues. She has accidentally found many (very bright!) flaring M stars, which is interesting. Ekta Patel (Arizona) talked about how M33 gets its outer morphology. Her claim is that it is not caused by its interaction with M31. If she's right, she makes predictions about dark-matter substructure around M33! Emily Stanford (Columbia) showed us measurements of stellar densities from exoplanet transits that are comparable to asteroseismology in precision. Not as good, but close! And different.

In the afternoon I worked on GALEX imaging with Dun Wang (NYU), Steven Mohammed (Columbia), and David Schiminovich (Columbia). We discussed how to release our images and sensitivity maps such that they can be responsibly used by the community. And Andrina Nicola (ETH) spoke about combining many cosmological surveys responsibly into coherent cosmological constraints. The problem is non-trivial when the surveys overlap volumetrically.

2017-10-10

a day at MIT

I spent the day today at MIT, to give a seminar. I had great conversations all day! Just a few highlights: Rob Simcoe and I discussed spectroscopic data reduction and my EPRV plans. He agreed that, in the long run, the radial-velocity measurements should be made in the space of the two-d pixel array, not extracted spectra. Anna Frebel and I discussed r-process stars, r-process elements, and chemical-abundance substructure in the Galactic halo. We discussed the immense amount of low-hanging fruit coming with Gaia DR2. I had lunch with the students, where I learned a lot about research going on in the Department. In particular Keaton Burns had interesting things to say about the applicability of spectral methods in solving fluid equations in some contexts. On the train up, I worked on the theoretical limits of self-calibration: What is the Cramér–Rao bound for flat-field components given a self-calibration program? This, for Euclid.

2017-10-09

Euclid and MCMC

I did some work on the (NYC) long weekend on two projects. In the first, I built some code to generate candidate observing strategies for the in-flight self-calibration program for ESA Euclid. Stephanie Wachter (MPIA) contacted me to discuss strategies and metrics for self-calibration quality. I wrote code, but realized that I ought to be able to deliver a sensible metric for deciding on dither strategy. This all relates to this old paper.

On Monday I discussed our nearly-finished MCMC paper with Dan Foreman-Mackey (Flatiron) and we decided to finish it for submission to the AAS Journals. I spent time working through the current draft and reformatting it for submission. There is lots to do, but maybe I can complete it this coming week?

2017-10-06

dust-hidden supernovae

In my weekly parallel-hacking, I re-learned how to use kplr with Elisabeth Andersson (NYU).

This was followed by a nice talk by Mansi Kasliwal (Caltech) about the overwhelming force on time-domain astronomy being implemented by her and others at Caltech. One of their projects will be imaging more than 3000 square degrees an hour! There isn't enough solid angle on the sky for them. She is finding lots of crazy transients that are intermediate in luminosity between supernovae and novae, and she doesn't know what they are. Also she may be finding the (long expected) fully-obscured supernovae. If she has found them, she may be doubling the observed supernova rates in nearby galaxies. Great stuff.

The day ended with lightning talks at the CCPP, with faculty introducing themselves to the new graduate students.

2017-10-05

uncertainty propagation

I started the day with a long discussion with Ana Bonaca (Harvard) about how to propagate uncertainties in Galactic gravitational potential parameters into some visualization of what we (in that context) know about the acceleration field. In principle, the acceleration field is more directly constrained (by our dynamical systems) than the potential. What we want (and it is ill-posed) is some visualization of what we know and don't know. Oddly, this conversation was, above all else, a conversation about linear algebra. We admitted to each other on the call that we are both learning a lot of math in this project!

[My day ended early because: NYC Comic Con!]

2017-10-04

Gaia and exoplanets

At our weekly Gaia DR2 prep workshop, a bunch of good ideas emerged from Megan Bedell (Flatiron) about exoplanet and star science. Actually, some of the best ideas could be done right now, before DR2! These include looking at our already-known co-moving pairs of stars for examples with short-cadence Kepler data or known planetary systems. There is also lots to do once DR2 does come out. In this same workshop, David Spergel (Flatiron) summarized the work that the Gaia team has done to build a simulated universe in which to test and understand its observations. These are useful for trying out projects in advance of the data release.

In the afternoon, everyone at the Flatiron CCA, at all levels, gave 2-minute, 1-slide lightning talks. It was great! There were many themes across the talks, including inference, fundamental physics, and fluid dynamics. On the first topic: There is no shortage of people at Flatiron who are thinking about how we might do better at learning from the data we have.

2017-10-03

systematizing surprise; taking logs

I had a substantial conversation with Kate Storey-Fisher (NYU) about possible anomaly-search projects in cosmology. The idea is to systematize the search for anomalies, and thereby get some control over the many-hypotheses issues. And also spin-off things around generating high-quality statistics (data compressions) for various purposes. We talked about the structure of the problem, and also what are the kinds of limited domains in which we could start. There is also a literature search we need to be doing.

I also made a Jupyter notebook for Megan Bedell (Flatiron), demonstrating that there is a bias when you naively take the log of your data and average the logs, instead of averaging the data. This bias is there even when you aren't averaging; in principle you ought to correct any model you make of the log of data for this effect, or at least when you transform from linear space to log or back again. Oh wait: This is only relevant if you are not also transforming the noise model appropriately! Obviously you should transform everything self-consistently! In this case we have nearly-Gaussian noise in the linear space (because physics) and we want to treat the noise in the log space as also linear (because computational tractability). Fortunately we are working with very high signal-to-noise data, so these biases are small.
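
The gist of that notebook fits in a few lines (these are my made-up numbers here, not Bedell's data):

    import numpy as np

    # High signal-to-noise data with Gaussian noise in the linear space.
    rng = np.random.default_rng(0)
    mu, sigma = 100.0, 5.0
    x = rng.normal(mu, sigma, size=1_000_000)

    print(np.log(mu))              # truth
    print(np.mean(np.log(x)))      # biased low, by roughly sigma^2 / (2 mu^2)
    print(np.log(np.mean(x)))      # consistent with log(mu)
    print(sigma**2 / (2 * mu**2))  # second-order (Jensen / Taylor) size of the bias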

2017-10-02

exploration vs exploitation

I met with Lauren Anderson (Flatiron) first-thing to figure out how we can munge our hacky #GaiaSprint projects into real and cutting-edge measurements of the Milky Way. We looked at the VVV infrared survey because it ought to be better than 2MASS for mapping the inner disk and bulge. We looked at using SDSS photometry to map the halo. On the latter, the dust modeling is far simpler, because for distant stars, the dust is just a screen, not an interspersed three-dimensional field. We also discussed the ever-present issue for a postdoc (or any scientist): How much time should you spend exploiting things you already know, and how much exploring new things you want to learn?

In the morning I also discussed the construction of (sparse) interpolation operators and their derivatives with Megan Bedell (Flatiron).

At lunch, Yacine Ali-Haimoud (NYU) gave a great brown-bag talk on the possibility that black holes make up the dark matter. He showed that there are various different bounds, all of which depend on rich astrophysical models. In the end, constraints from small-scale clustering rule it out (he thinks). Matt Kleban (NYU) and I argued that the primordial black holes could easily be formed in some kind of glass that has way sub-Poisson local power. Not sure if that's true!

2017-09-30

LIGO noise correlations

I spent some weekend science time reading this paper on LIGO noise that claims that the time delays in the LIGO detections (between the Louisiana and Washington sites) are seen in the noise too—that is, that the time delays or coincidence aspects of LIGO detections are suspect. I don't understand the paper completely, but they show plots (Figure 3) that show very strong phase–frequency relationships in data that are supposed to be noise-dominated. That's strange; if there are strong phase–frequency relationships, then there are almost always visible structures in real space. (To see this, imagine what happens as you modify the zero of time: The phases wind up!) Indeed, it is the phases that encode real-space structure. I don't have an opinion on the bigger question yet, but I would like to have seen the real-space structures creating the phase–frequency correlations they show.
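
The parenthetical point can be made with a toy example: give the Fourier coefficients a coherent phase–frequency relation (a linear winding) and flat amplitudes, and the inverse transform is a localized spike in real space.

    import numpy as np

    # Unit-amplitude Fourier coefficients with phases winding linearly in
    # frequency; the inverse transform piles all the power into one sample.
    n = 1024
    k = np.arange(n // 2 + 1)
    coeffs = np.exp(-2j * np.pi * k * 300 / n)
    x = np.fft.irfft(coeffs, n)
    print(np.argmax(np.abs(x)))   # 300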

2017-09-29

the life stories of counter-rotating galaxies

Today was the third experiment with Friday-morning parallel working in my office. It is like a hack week spread out over months! The idea is to work in the same place and build community. During the session, I worked through a multi-linear model for stellar spectra and tellurics with Bedell, based on conversations with Foreman-Mackey earlier in the week. I also worked through a method for generating realistic and self-consistent p(z) functions for fake-data experiments with Malz. This is a non-trivial problem: It is hard to generate realistic fake data, and it is even harder to generate realistic posterior PDFs that might come out of a probabilistic set of analyses of those data.

Just before lunch, Tjitske Starkenburg (Flatiron) gave the NYU Astro Seminar. She mainly talked about counter-rotating galaxies. She took the unusual approach of following up, in the simulations she has done, some typical examples (where the stars rotate opposite to the gas) and figuring out their individual histories (of accretion and merging and movement in the large-scale structure). Late in the day, she and I returned to these subjects to figure out if there might be ways to read a galaxy's individual cosmological-context history off of its present-day observable properties. That's a holy grail of galaxy evolution.

2017-09-28

what's the point of direct-detection experiments?

In the morning I spoke with Ana Bonaca (Harvard) and Chris Ick (NYU) about their projects. Bonaca is looking at multipole expansions of the Milky Way potential from an information-theory (what can we know?) point of view. We are working out how to visualize and test her output. Ick is performing Bayesian inference on a quasi-periodic model for Solar flares. He needs to figure out how to take his output and make a reliable claim about a flare being quasi-periodic (or not).

Rouven Essig (Stonybrook) gave a nice Physics Colloquium about direct detection of dark matter. He is developing strong limits on dark matter that might interact with leptons. The nice thing is that such a detection would be just as important for the light sector (new physics) as for the dark sector. He gave a good overview of the direct-detection methods. After the talk, we discussed the challenge of deciding what to do as non-detections roll in. This is not unlike the issues facing accelerator physics and cosmology: If the model is just what we currently think, then all we are doing is adding precision. The nice thing about cosmology experiments is that even if we don't find new cosmological physics, we usually discover and measure all sorts of other things. Not so true with direct-detection experiments.

2017-09-27

Gaia, EPRV, photons

In our Gaia DR2 prep workshop, Stephen Feeney (Flatiron) led a discussion on the Lutz–Kelker correction to parallaxes, and when we should and shouldn't use it. He began by re-phrasing the original LK paper in terms of modern language about likelihoods and posteriors. Once you put it in modern language, it becomes clear that you should (almost) never use these kinds of corrections. It is especially wrong to use them in the context of Cepheid (or other distance-ladder) cosmology; this is an error in the literature that Feeney has uncovered.

That discussion devolved into a discussion of the Gaia likelihood function. Nowhere in the Gaia papers does it clearly say how to reconstruct a likelihood function for the stellar parallaxes from the catalog, though the nice papers by Astraatmadja, such as this one, do give a suggestion. Astraatmadja is a Gaia insider, so his suggestion is probably correct, but there isn't an equivalent statement in the official data-release papers (to my knowledge). There is a big set of assumptions underlying this likelihood function (which is the one we use); we unpacked them a bit in the meeting. My position is that this is so important, it might be worth writing a short note for arXiv.
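
For concreteness, the assumption in question is basically the following sketch (my toy numbers, not anything from the Gaia papers; parallaxes in mas, distances in kpc): treat the catalog parallax and its reported uncertainty as defining a Gaussian likelihood in true parallax, and evaluate it at 1/d when inferring a distance d.

    import numpy as np

    # Hypothetical helper illustrating the assumed Gaussian-in-parallax likelihood.
    def ln_likelihood(d_kpc, parallax_mas, parallax_error_mas):
        model = 1.0 / d_kpc
        return (-0.5 * ((parallax_mas - model) / parallax_error_mas) ** 2
                - 0.5 * np.log(2.0 * np.pi * parallax_error_mas ** 2))

    print(ln_likelihood(0.5, 2.1, 0.3))  # a star with catalog parallax 2.1 +/- 0.3 mas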

In Stars group meeting, Megan Bedell (Flatiron) showed her current status on measuring extremely precise radial velocities using data-driven models for the star and the tellurics. It is promising that her methods seem to be doing better than standard pipelines; maybe she can beat the world's best current precision?

Chuck Steidel (Caltech) gave a talk in the afternoon about things he can learn about ionizing photons from galaxies at high redshift by stacking spectra. He had a number of interesting conclusions. One is that high-mass-star binaries are important! Another is that escape fraction for ionizing photons goes up with the strength of nebular lines, and down with total UV luminosity. He had some physical intuitions for these results.

2017-09-26

machine learning

The day started with a somewhat stressful call with Hans-Walter Rix (MPIA), about applied-math issues: How to make sure that numerical (as opposed to analytic) derivatives are calculated correctly, how to make sure that linear-algebra operations are performed correctly when matrices are badly conditioned, and so on. The context is: Machine-learning methods have all sorts of hard numerical issues under the hood. If you can't follow those things up correctly, you can't do correct operations with machine-learning models. It's stressful, because wrongness here is wrongness everywhere.

Later in the morning, Kilian Walsh (NYU) brought me some ideas about making the connections between dark-matter simulations and observed galaxies more flexible on the theoretical / interpretation side. We discussed a possible framework for immensely complexifying the connections between dark-matter halos and galaxy properties, way beyond the currently ascendant HOD models. What we wrote down is interesting, but it might not be tractable.

2017-09-25

thermal relics

In a low-research day, I discussed probabilistic model results with Axel Widmark (Stockholm), a paper title and abstract with Megan Bedell (Flatiron), and Gaia DR2 Milky Way mapping with Lauren Anderson (Flatiron).

The research highlight of the day was an excellent brown-bag talk by Josh Ruderman (NYU) about thermal-relic models for dark matter. It turns out there is a whole zoo of models beyond the classic WIMP. In particular, the number-changing interactions don't need to interact with the visible sector. The models can be protected by dark-sector layers and have very indirect (or no) connection to our sector. We discussed the differences between models that are somehow likely or natural and models that are somehow observable or experimentally interesting. These two sets don't necessarily overlap that much!

2017-09-21

GPLVM Cannon

Today Markus Bonse (Darmstadt) showed me (and our group: Eilers, Rix, Schölkopf) his Gaussian-Process latent-variable model for APOGEE spectra. It looks incredible! With only a few latent variable dimensions, it does a great job of explaining the spectra, and its performance (even under validation) improves as the latent dimensionality increases. This is something we have wanted to do to The Cannon for ages: Switch to GP functions and away from polynomials.

The biggest issue with the vanilla GPy GPLVM implementation being used by Bonse is that it treats the data as homoskedastic—all data points are considered equal. When in fact we have lots of knowledge about the noise levels in different pixels, and we have substantial (and known) missing and bad data. So we encouraged him to figure out how to implement heteroskedasticity. We also discussed how to make a subspace of the latent space interpretable by conditioning on known labels for some sources.

2017-09-20

SDSS+Gaia

At our new weekly Gaia DR2 prep meeting, Vasily Belokurov (Cambridge) showed us a catalog made by Sergei Koposov (CMU) which joins SDSS imaging and Gaia positions to make a quarter-sky, deep proper-motion catalog. His point: Many projects we want to do with Gaia DR2 we can do right now with this new matched catalog!

At the Stars group meeting, Ruth Angus led a discussion of possible TESS proposals. These are due soon!

2017-09-19

unresolved binaries

Today Axel Widmark (Stockholm) showed up in NYC for two weeks of collaboration. We talked out various projects and tentatively decided to look at the unresolved binary stars in the Gaia data. That is, do some kind of inference about whether stars are single or double, and if double, what their properties might be. This is for stars that appear single to Gaia (but, if truly double, are brighter than they should be). I suggested we start by asking “what stars in the data can be composed of two other stars in the data?” with appropriate marginalization.
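
The zeroth-order photometric version of that question is just flux addition; here is the arithmetic as a sketch (hypothetical magnitudes, one band):

    import numpy as np

    # An unresolved pair with magnitudes m1, m2 appears as a single source with
    # the magnitude of the summed fluxes.
    def combined_mag(m1, m2):
        return -2.5 * np.log10(10 ** (-0.4 * m1) + 10 ** (-0.4 * m2))

    print(combined_mag(10.0, 10.0))  # an equal pair is 0.75 mag brighter than either star
    print(combined_mag(10.0, 13.0))  # a faint companion barely perturbs the primary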

2017-09-18

latent-variable models for stars

The day started with various of us (Rix, Eilers, Schölkopf, Bonse) reviewing Bonse's early results on applying a GPLVM to stellar spectra. This looks promising! We encouraged Bonse to visualize the models in the space of the data.

The data-driven latent-variable models continued in the afternoon with Megan Bedell and me discussing telluric spectral models. We were able to debug a sign error and then make a PCA-like model for telluric variations! The results are promising, but there are continuum-level issues everywhere, and I would like a more principled approach to that. Indeed, I could probably write a whole book about continuum normalization at this point (and still not have a good answer).
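
By "PCA-like" I mean something with the shape of the following sketch (placeholder random numbers standing in for the residual spectra, not the real code):

    import numpy as np

    # Stack continuum-normalized residual spectra (one row per epoch), SVD them,
    # and keep a few components as the telluric-variation basis.
    resids = np.random.default_rng(5).normal(size=(40, 3000))  # placeholder data
    u, s, vt = np.linalg.svd(resids - resids.mean(axis=0), full_matrices=False)
    k = 3
    basis = vt[:k]                 # k telluric eigen-spectra
    amps = u[:, :k] * s[:k]        # per-epoch amplitudes
    model = amps @ basis           # rank-k reconstruction of the residuals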

2017-09-17

regression

Our data-driven model for stars, The Cannon, is a regression. That is, it figures out how the labels generate the spectral pixels with a model for possible functional forms for that generation. I spent part of today building a Jupyter notebook to demonstrate that—when the assumptions underlying the regression are correct—the results of the regression are accurate (and precise). That is, the maximum-likelihood regression estimator is a good one. That isn't surprising; there are very general proofs; but it answers some questions (that my collaborators have) about cases where the labels (the regressors) are correlated in the training set.
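
The flavor of the notebook, in a few lines (an assumed toy setup, not the actual notebook): labels that are strongly correlated across the training set, pixels generated linearly from them, and ordinary least squares still recovering the generating coefficients.

    import numpy as np

    rng = np.random.default_rng(1)
    n_stars = 500
    cov = np.array([[1.0, 0.95], [0.95, 1.0]])   # highly correlated labels
    X = rng.multivariate_normal(np.zeros(2), cov, size=n_stars)
    beta_true = np.array([0.7, -0.3])            # how the labels generate one pixel
    y = X @ beta_true + rng.normal(scale=0.05, size=n_stars)

    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(beta_true, beta_hat)                   # close, despite the correlation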

2017-09-15

new parallel-play workshop

Today was the first try at a new group-meeting idea for my group. I invited my NYC close collaborators to my (new) NYU office (which is also right across the hall from Huppenkothen and Leistedt) to work on whatever they are working on. The idea is that we will work in parallel (and independently), but we are all there to answer questions, discuss, debug, and pair-code. It was intimate today, but successful. Megan Bedell (Flatiron) and I debugged a part of her code that infers the telluric absorption spectrum (in a data-driven way, of course). And Elisabeth Andersson (NYU) got kplr and batman installed inside the sandbox that runs her Jupyter notebooks.

2017-09-14

latent variable models, weak lensing

The day started with a call with Bernhard Schölkopf (MPI-IS), Hans-Walter Rix (MPIA), and Markus Bonse (Darmstadt) to discuss taking Christina Eilers's (MPIA) problem of modeling spectra with partial labels over to a latent-variable model, probably starting with the GPLVM. We discussed data format and how we might start. There is a lot of work in astronomy using GANs and deep learning to make data generators. These are great, but we are betting it will be easier to put causal structure that we care about into the latent-variable model.

At Cosmology & Data Group Meeting at Flatiron, the whole group discussed the big batch of weak lensing results released by the Dark Energy Survey last month. A lot of the discussion was about understanding the covariances of the likelihood information coming from the weak lensing. This is a bit hard to understand, because everyone uses highly informative priors (for good reasons, of course) from prior data. We also discussed the multiplicative bias and other biases in shape measurement; how might we constrain these independently from the cosmological parameters themselves? Data simulations, of course, but most of us would like to see a measurement to constrain them.

At the end of Cosmology Meeting, Ben Wandelt (Flatiron) and I spent time discussing projects of mutual interest. In particular we discussed dimensionality reduction related to galaxy morphologies and spatially resolved spectroscopy, in part inspired by the weak-lensing discussion, and also the future of Euclid.

2017-09-13

Gaia, asteroseismology, robots

In our panic about upcoming Gaia DR2, Adrian Price-Whelan and I have established a weekly workshop on Wednesdays, in which we discuss, hack, and parallel-work on Gaia projects in the library at the Flatiron CCA. In our first meeting we just said what we wanted to do, jointly edited a big shared google doc, and then started working. At each workshop meeting, we will spend some time talking and some time working. My plan is to do data-driven photometric parallaxes, and maybe infer some dust.

At the Stars Group Meeting, Stephen Feeney (Flatiron) talked about asteroseismology, where we are trying to get the seismic parameters without ever taking a Fourier Transform. Some of the crowd (Cantiello in particular) suggested that we have started on stars that are too hard; we should choose super-easy, super-bright, super-standard stars to start. Others in the crowd (Hawkins in particular) pointed out that we could be using asteroseismic H-R diagram priors on our inference. Why not be physically motivated? Duh.

At the end of Group Meeting, Kevin Schawinski (ETH) said a few words about auto-encoders. We discussed imposing more causal structure on them, and seeing what happens. He is going down this path. We also veered off into networks-of-autonomous-robots territory for LSST follow-up, keying off remarks from Or Graur (CfA) about time-domain and spectroscopic surveys. Building robots that know about scientific costs and utility is an incredibly promising direction, but hard.

2017-09-12

statistics of power spectra

Daniela Huppenkothen (NYU) came to talk about power spectra and cross-spectra today. The idea of the cross-spectrum is that you multiply one signal's Fourier transform against the complex conjugate of the other's. If the signals are identical, this is the power spectrum. If they differ by phase lags, the answer has an imaginary part, and so on. We then launched into a long conversation about the distribution of cross-spectrum components given distributions for the original signals. In the simplest case, this is about distributions of sums of products of Gaussian-distributed variables, where analytic results are rare. And that's the simplest case!
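
In code, the definition (and the phase-lag behavior) looks like this toy (my example, not Huppenkothen's data):

    import numpy as np

    # Two identical signals, one lagged; the cross-spectrum of the pair picks up
    # a frequency-dependent phase (a nonzero imaginary part) from the lag.
    rng = np.random.default_rng(2)
    n, lag = 4096, 25
    x = rng.normal(size=n)
    y = np.roll(x, lag)
    fx, fy = np.fft.rfft(x), np.fft.rfft(y)
    cross = fx * np.conj(fy)
    power = fx * np.conj(fx)                # cross-spectrum of x with itself
    print(np.allclose(power.imag, 0.0))     # True: identical signals give the power spectrum
    print(np.abs(cross.imag).max() > 0.0)   # True: the lag shows up in the phases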

One paradox or oddity that we discussed is the following: In a long time series, imagine that every time point gets a value (flux value, say) that is drawn from a very skew or very non-Gaussian distribution. Now take the Fourier transform. By central-limit reasoning, all the Fourier amplitudes must be very close to Gaussian-distributed! Where did the non-Gaussianity go? After all, the FT is simply a rotation in data space. I think it probably all went into the correlations of the Fourier amplitudes, but how to see that? These are old ideas that are well understood in signal processing, I am sure, but not by me!
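
A quick numerical version of the paradox (my toy; it illustrates the puzzle, it does not resolve it):

    import numpy as np
    from scipy.stats import skew

    # Very skewed (exponential) time series; the real parts of its Fourier
    # amplitudes nonetheless come out looking nearly Gaussian.
    rng = np.random.default_rng(3)
    x = rng.exponential(size=4096)
    f = np.fft.rfft(x - x.mean())
    re = f.real[1:]               # drop the zero-frequency term
    print(skew(x), skew(re))      # roughly 2 for the time series, roughly 0 for the amplitudes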

2017-09-11

EPRV

Today I met with Megan Bedell, who is just about to start work here in the city at the Flatiron Institute. We discussed our summer work on extreme precision radial-velocity measurements. We have come to the realization that we can't write a theory paper on this without dealing with tellurics and continuum, so we decided to face that in the short term. I don't want to get too bogged down, though, because we have a very simple point: Some ways of measuring the radial velocity saturate the Cramér–Rao bound, many do not!
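
The bound itself is easy to write down; here is a back-of-envelope sketch (a toy single-line spectrum with my made-up numbers, not anything from the paper):

    import numpy as np

    # Cramér–Rao bound on a radial-velocity shift for a spectrum S(lambda) with
    # per-pixel noise sigma; a Doppler shift moves the spectrum by
    # dS/dv = -(lambda / c) dS/dlambda.
    c = 299792458.0  # m/s
    wave = np.linspace(5000.0, 5010.0, 2000)                      # Angstroms
    S = 1.0 - 0.5 * np.exp(-0.5 * ((wave - 5005.0) / 0.05) ** 2)  # one absorption line
    sigma = 0.01 * np.ones_like(S)                                # S/N of 100 per pixel

    dS_dlam = np.gradient(S, wave)
    dS_dv = -(wave / c) * dS_dlam
    fisher = np.sum((dS_dv / sigma) ** 2)
    print(1.0 / np.sqrt(fisher), "m/s")                           # best possible RV uncertainty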

2017-09-08

reconstruction, modifications to GR

The day started with a conversation with Elisabeth Andersson (NYU) about possible projects. We tentatively decided to look for resonant planets in the Kepler data. I sent her the Luger paper on TRAPPIST-1.

Before lunch, there was a great Astro Seminar by Marcel Schmittfull (IAS) about using the non-linearities in the growth of large-scale structure to improve measurements of cosmological parameters. He made two clear points (to me): One is that the first-order "reconstruction" methods used to run back the clock on nonlinear clustering can be substantially improved upon (and even small improvements can lead to large improvements in cosmological parameter estimation). The other is that there is as much information about cosmological parameters in the skewness as the variance (ish!). After his talk I asked about improving reconstruction even further using machine learning, which led to a conversation with Marc Williamson (NYU) about a possible pilot project.

In the afternoon, after a talk about crazy black-hole ideas from Ram Brustein (Ben-Gurion), Matt Kleban (NYU) and I discussed the great difficulty of seeing strong-field corrections to general relativity in gravitational-wave measurements. The problem is that the radiation signal is dominated by activity well outside the Schwarzschild radius: Things close to the horizon are highly time-dilated and red-shifted and so don't add hugely to the strong parts of the signal. Most observable signatures of departures from GR are probably already ruled out by other observations! With the standard model, dark matter, dark energy, and GR all looking like they have no observational issues, fundamental physics is looking a little boring right now!

2017-09-07

a non-parametric map of the MW halo

The day started with a call with Ana Bonaca (CfA), in which we discussed generalizing her Milky Way gravitational potential to have more structure, substructure, and freedom. We anticipate that when we increase this freedom, the precision with which any one cold stellar stream constrains the global MW potential should decrease. Eventually, with a very free potential, in principle each stream should constrain the gravitational acceleration field in the vicinity of that stream! If that's true, then a dense network of cold streams throughout the Milky Way halo would provide a non-parametric (ish) map of the acceleration field throughout the Milky Way halo!

In the afternoon I pitched new projects to Kate Storey-Fisher (NYU). She wants to do cosmology! So I pitched the projects I have on foregrounds for next-generation CMB and line-intensity mapping experiments, and my ideas about finding anomalies (and new statistics for parameter estimation) in a statistically responsible way. On the latter, I warned her that some of the relevant work is in the philosophy literature.

2017-09-06

MW dynamics

At Flatiron Stars Group Meeting, Chervin Laporte (Columbia) led a very lively discussion of how the Sagittarius and LMC accretion events into the Milky Way halo might be affecting the Milky Way disk. There can be substantial distortions to the disk from these minor mergers, and some of the action comes from the fact that the merging satellite raises a wake or disturbance in the halo that magnifies the effect of the satellite itself. He has great results that should appear on the arXiv soon.

After that, there were many discussions about things Gaia-related. We decided to start a weekly workshop-like meeting to prepare for Gaia DR2, which is expected in April. We are not ready! But when you are talking about billions of stars, you have to get ready in advance.

One highlight of the day was a brief chat with Sarah Pearson (Columbia), Kathryn Johnston (Columbia), and Adrian Price-Whelan (Princeton) about the formal structure of our cold-stream inference models, and the equivalence (or not) of our methods that run particles backwards in time (to a simpler distribution function) or forwards in time (to a simpler likelihood function). We discussed the possibility of differentiating our codes to permit higher-end sampling. We also discussed the information content in streams (work I have been doing with Ana Bonaca of Harvard) and the toy quality of most of the models we (and others) have been using.

2017-09-05

not much; but which inference is best?

Various tasks involved in the re-start of the academic year took out my research time today. But I did have a productive conversation with Alex Malz (NYU) about his current projects and priorities. One question that Malz asked is: Imagine you have various Bayesian inference methods or systems, each of which performs some (say) Bayesian classification task. Each inference outputs probabilities over classes. How can you tell which inference method is the best? That's a hard problem! If you have fake data, you could ask which puts the highest probabilities on the true answer. Or you could ask which does the best when used in Bayesian decision theory, with some actions (decisions) and some utilities, or a bag of actors with different utilities. After all, different kinds of mistakes cost different actors different amounts! But then how do you tell which inference is best on real (astronomical) data, where you don't know what the true answer is? Is there any strategy? Something about predicting new data? Or is there something clever? I am out of my league here.
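
For the fake-data case, at least, the first option has a concrete version, something like a mean log score (my sketch, toy numbers):

    import numpy as np

    # Mean log probability that each method assigns to the known true classes.
    def mean_log_score(probs, true_class):
        # probs: (n_objects, n_classes) predicted probabilities
        # true_class: (n_objects,) integer true class labels
        return np.mean(np.log(probs[np.arange(len(true_class)), true_class]))

    probs_a = np.array([[0.8, 0.2], [0.6, 0.4], [0.3, 0.7]])
    probs_b = np.array([[0.55, 0.45], [0.9, 0.1], [0.2, 0.8]])
    truth = np.array([0, 0, 1])
    print(mean_log_score(probs_a, truth), mean_log_score(probs_b, truth))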

2017-09-01

#LennartFest day 3

Today was the last day of a great meeting. Both yesterday and today there were talks about future astrometric missions, including the Gaia extension, and also GaiaNIR, SmallJasmine, and Theia. In his overview talk on the latter, Alberto Krone-Martins put a lot of emphasis on the internal monitoring systems for the design, in which there will be lots of metrology of the spacecraft structure, optics, and camera. He said the value of this was a lesson from Gaia.

This point connects strongly to things I have been working on in self-calibration. In the long run, if a survey is designed properly, it will contain enough redundancy to permit self-calibration. In this sense, the internal monitoring has no long-term value. For example, the Gaia spacecraft includes a basic-angle monitor. But in the end, the data analysis pipeline will determine the basic angle continuously, from the science data themselves. They will not use the monitor data directly in the solution. The reason is: The information about calibration latent in the science data always outweighs what's in the calibration data.

That said (and Timo Prusti emphasized this to me), the internal monitoring and calibration data are very useful for diagnosing problems as they arise. So I'm not saying you shouldn't value such systems and data; I'm saying that you should still design your projects so that you don't need them at the end of the day. This is exactly how the SDSS imaging-data story played out, and it was very, very good.

I also gave my own talk at the meeting today. My slides are here. I think I surprised some part of the audience when I said that I thought we could do photometric parallax at all magnitudes without ever using any physical or numerical model of stars!

One thing I realized, as I was giving the talk, is that there is a sense in which the data-driven models make very few assumptions indeed. They assume that Gaia's geometric parallax measurements are good, and that its noise model is close to correct. But the rest is just very weak assumptions about functional forms. So there is a sense in which our data-driven model (or a next-generation one) is purely geometric. Photometric parallaxes with a purely geometric basis. Odd to think of that.

At the end of the meeting, Amina Helmi told me about vaex, which is a very fast visualization tool for large data sets, built on clever data structures. I love those!

2017-08-31

#LennartFest day 2

Many great things happened at the meeting today; way too many to mention. Steinmetz showed how good the RAVE-on results are, and nicely described also their limitations. Korn showed an example of an extremely underluminous star, and discussed possible explanations (most of them boring data issues). Brown explained that with a Gaia mission extension, the parameter inference for exoplanet orbit parameters can improve as a huge power (like 4.5?) of mission lifetime. That deserves more thought. Gerhard explained that the MW bar is a large fraction of the mass of the entire disk! Helmi showed plausible halo substructure and got me really excited about getting ready for Gaia DR2. In the questions after her talk, Binney claimed that galaxy halos don't grow primarily by mergers, not even in theory! Hobbs talked about a mission concept for a post-Gaia NIR mission (which would be incredible). He pointed out that the reference frame and stellar positions require constant maintenance; the precision of Gaia doesn't last.

One slightly (and embarrassingly) frustrating thing about the talks today was that multiple talks discussed open clusters without noting that we found a lot ourselves. And several discussed the local standard of rest without mentioning our value. Now of course I (officially) don't mind; neither of these are top scientific objectives for me (I don't even think the LSR exists). But it is a Zen-like reminder not to be attached to material things (like citations)!

2017-08-30

#LennartFest day 1

I broke my own rules and left #AstroHackWeek to catch up with #LennartFest. The reason for the rule infraction is that the latter meeting is the retirement celebration of Lennart Lindegren (Lund) who is one of the true pioneers in astrometry, and especially astrometry in space and at scale. My loyal reader knows his influence on me!

Talks today were somewhat obscured by my travel exhaustion. But I learned some things! Francois Mignard (Côte d'Azur) gave a nice talk on the reference frame. He started with an argument that we need a frame. I agree that we want inertial proper motions, but I don't agree that they have to be on a coordinate grid. If there is one thing that contemporary physics teaches us it is that you don't need a coordinate system. But the work being done to validate the inertial-ness of the frame is heroic, and important.

Floor van Leeuwen (Cambridge) spoke about star clusters. He hypothesized—and then showed—that proper motions can be as informative about distance as parallaxes, especially for nearby clusters. This meshes with things Boris Leistedt (NYU) and I have been talking about, and I think we can lay down a solid probabilistic method for combining these kinds of information responsibly.

Letizia Capitanio (Paris) reminded us (I guess, but it was new to me) that the Gaia RVS instrument captures a diffuse interstellar band line. This opens up the possibility that we could do kinematic dust mapping with Gaia! She also showed some competitive dust maps based on Gaussian Process inferences.