2019-08-31

self-calibration for stellar abundances

The self-calibration idea is extremely powerful. There are many ways to describe it, but one is that you can exploit your beliefs about causal structure to work out which trends in your data are real, and which are spurious, arising from (say) calibration issues. For example, if you know that there is a set of stars that don't vary much over time, the differences you see in their magnitudes on repeat observations probably have more to do with throughput variations in your system than real changes to the stars. And your confidence is even greater if you can see the variation correlate with airmass! This was the basis of the photometric calibration (that I helped design and build) of the Sloan Digital Sky Survey imaging, and similar arguments have underpinned self-calibrations of cosmic microwave background data, radio-telescope atmospheric phase shifts, and Kepler light curves, among many other things.

The idea I worked on today relates to stellar abundance measurements. When we measure stars, we want to determine absolute abundances (or abundances relative to the Sun, say). We want these abundances to be consistent across stars, even when those stars have atmospheres at very different temperatures and surface gravities. Up to now, most calibration has been at the level of checking that clusters (particularly open clusters) show consistent abundances across the color–magnitude diagram. But we know that the abundance distribution in the Galaxy ought to depend strongly on actions, weakly on angles, and essentially not at all (with some interesting exceptions) on stellar temperature, surface gravity, or which instrument or fiber took the spectrum. So we are all set to do a self-calibration! I wrote a few words about that today, in preparation for an attempt.
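
To make the plan concrete, here is a minimal sketch of the kind of fit I have in mind, in Python with fake data; the predictors chosen, the purely linear model, and all of the numbers are assumptions for illustration, not the actual calibration we would run. The idea is to fit the measured abundances simultaneously with predictors we believe are real (the actions) and predictors we believe are spurious (temperature, surface gravity), and then subtract the spurious part.

```python
import numpy as np

rng = np.random.default_rng(42)
n_stars = 5000

# Fake data standing in for a survey; every distribution here is illustrative.
actions = rng.lognormal(mean=0.0, sigma=0.5, size=(n_stars, 3))  # (J_R, J_phi, J_z)
teff = rng.uniform(4500., 6500., n_stars)   # effective temperature [K]
logg = rng.uniform(1.0, 4.5, n_stars)       # surface gravity [dex]

# True abundances depend (here, linearly in log-action) on the actions only...
feh_true = -0.5 + 0.1 * (np.log(actions) @ np.array([-0.3, 0.5, -0.2]))
# ...but the *measured* abundances pick up a spurious temperature trend.
feh_obs = feh_true + 2e-4 * (teff - teff.mean()) + 0.02 * rng.standard_normal(n_stars)

# Self-calibration: fit the measurements with "real" (action) predictors and
# "nuisance" (Teff, logg) predictors simultaneously, then remove the nuisance part.
X_real = np.column_stack([np.ones(n_stars), np.log(actions)])
X_nuis = np.column_stack([teff - teff.mean(), logg - logg.mean()])
X = np.column_stack([X_real, X_nuis])
coeffs, *_ = np.linalg.lstsq(X, feh_obs, rcond=None)

feh_cal = feh_obs - X_nuis @ coeffs[X_real.shape[1]:]
print(np.std(feh_obs - feh_true), np.std(feh_cal - feh_true))  # the second is smaller
```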

2019-08-28

not much

I had a travel day today. Great for getting work done! But all I did was write a few paragraphs in my paper with Bedell on radial-velocity bounds.

2019-08-27

modeling coronagraph residuals

Mattias Samland (MPIA), as part of his PhD dissertation, adapted the CPM model we built to calibrate (and image-difference) Kepler and TESS imaging to operate on direct imaging of exoplanets. The idea is that the direct imaging is taken over time, and speckles move around. They move around continuously and coherently, so a data-driven model can capture them, and distinguish them from a planet signal. (The word "causal" is the C in CPM, because it is about the differences between how systematics and real signals present themselves in the data.) There is lots of work in this area (including my own), but it tends to make use of the spatial (and wavelength) rather than temporal coherence. The CPM is all about time. It turns out this works extremely well; Samland's adaptation of CPM looks like it outperforms spatial methods, especially at small “working angles” (near the nulled star; this is coronagraphy!).
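
Here is a minimal sketch of the temporal idea; it is not Samland's actual pipeline, and the ridge regularization, the choice of predictor pixels, and the fake data cube are all assumptions. Each target pixel's time series is modeled as a linear combination of the time series of other pixels (chosen, in practice, far enough away that they cannot share the planet signal), and the planet has to live in the residuals.

```python
import numpy as np

def cpm_residuals(cube, target, predictors, lam=1e3):
    """Toy CPM-style fit. `cube` is the imaging flattened to shape
    (n_times, n_pixels); `target` is the index of the pixel being modeled;
    `predictors` are indices of pixels assumed to share the systematics
    (speckles) but not the planet signal. Returns data minus model."""
    y = cube[:, target]          # (n_times,)
    A = cube[:, predictors]      # (n_times, n_predictors)
    # Ridge-regularized linear least squares: w = (A^T A + lam I)^{-1} A^T y
    w = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
    return y - A @ w

# Fake data: 300 time steps of 64x64 images, flattened.
rng = np.random.default_rng(0)
cube = rng.standard_normal((300, 64 * 64))
resid = cpm_residuals(cube, target=2080, predictors=np.arange(500))
```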

But of course a model that uses the temporal coherence but ignores the spatial and wavelength coherence of the speckles cannot be the best model! There is coherence in all four directions (time, two angles, and wavelength), so a really good speckle model must be possible. That's a great thing to work on in the next few years, especially with the growing importance of coronagraphs at ground-based and space-based observatories, now and in the future. Samland and I discussed all this, and the specifics of the paper he is nearly ready to submit.

2019-08-26

unsupervised spectral modeling

I'm very proud of the things we have done over the years with our project called The Cannon, in which we learn a generative model of stellar spectra from stellar labels, all data driven, and then use that generative model to label other stellar spectra. This system has been successful, and it is also robust against certain kinds of over-fitting, because it is formulated as a regression from labels to data (and not the other way around). However, The Cannon has some big drawbacks. One is that (in its current form) the function space is hard-coded to be polynomial, which is both too flexible and not flexible enough, depending on context. Another is that the spectral representation is the pixel basis, which is just about the worst possible representation, given spectra of stars filled with known absorption lines at fixed resolution. And another is that the model might need latent freedoms that go beyond the known labels, either because the labels have issues (they are noisy), because some are missing, or because they are incomplete (the full set of labels isn't sufficient to predict the full spectrum).

This summer we have discussed projects to address all three of these issues. Today I worked down one of these directions with Adam Wheeler (Columbia): The idea is to build a purely linear version of The Cannon, but where each star is modeled using a generative model built just on its near neighbors. So you get the simplicity and tractability of a linear model but the flexibility of non-parametrics. But we are also thinking about operating in a regime in which we have no labels! Can we measure abundance differences between stars without ever knowing the absolute abundances? I feel like it might be possible if we structure the model correctly. We discussed looking at Eu and Ba lines in APOGEE spectra as a start; outliers in Eu or Ba are potentially very interesting astrophysically.
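
As a sketch of the local-linear part of this (my own toy, not Wheeler's code; the neighbor count, the distance metric, and the plain least-squares fits are all assumptions): for each target star, take its nearest labeled neighbors, fit a linear generative model spectrum = mean + (labels - mean labels) @ A on just those neighbors, and then invert that model for the target.

```python
import numpy as np

def local_linear_labels(target_flux, train_flux, train_labels, k=50):
    """Estimate labels for one target spectrum from its k nearest labeled
    training spectra, using a linear generative model fit only locally."""
    # Nearest neighbors of the target in (naive, unweighted) spectrum space.
    d2 = np.sum((train_flux - target_flux) ** 2, axis=1)
    idx = np.argsort(d2)[:k]
    F, L = train_flux[idx], train_labels[idx]

    # Local linear generative model: flux ~ f0 + (labels - l0) @ A
    f0, l0 = F.mean(axis=0), L.mean(axis=0)
    A, *_ = np.linalg.lstsq(L - l0, F - f0, rcond=None)   # (n_labels, n_pixels)

    # Invert the generative model for the target (linear least squares again).
    dl, *_ = np.linalg.lstsq(A.T, target_flux - f0, rcond=None)
    return l0 + dl

# Fake data: 1000 training spectra with 300 pixels and 3 labels each.
rng = np.random.default_rng(1)
train_labels = rng.standard_normal((1000, 3))
response = rng.standard_normal((3, 300))
train_flux = 1.0 + 0.01 * (train_labels @ response) + 0.001 * rng.standard_normal((1000, 300))
print(local_linear_labels(train_flux[0], train_flux[1:], train_labels[1:]))
```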

2019-08-23

setting the scope for the next GD-1 paper

Today (and really over the last few days as well) I had a long discussion with Ana Bonaca (Harvard) about the results of our spectroscopy in the GD-1 stellar-stream fields. As my loyal reader knows, Bonaca, Price-Whelan, and I have a prediction for what the radial velocities should look like in the stream, if it is a cold stream that has been hit by a massive perturber. Our new velocity measurements (with the Hectochelle instrument) are not the biggest and best possible confirmation of that prediction!

However, our velocities are not inconsistent with our predictions either. The question is: What to say in our paper about them? We listed the top conclusions of the spectroscopy, and also discussed the set of figures that would bolster and explain those conclusions. Now to plotting and writing.

Along the way to understanding these conclusions, I think Bonaca has found a systematic issue (at extremely fine radial-velocity precision) in the way that the Hectochelle instrument measures radial velocities. I hope we are right, because if we are, the GD-1 stream might become very cold, and our velocity constraints on any perturbation will become very strong. But we will follow up with the Hectochelle team next week. It's pretty subtle.

2019-08-22

probabilistic catalogs and likelihoods

Today I was finally back up at MPIA. I spent a good fraction of the day talking with Doug Finkbeiner (Harvard), Josh Speagle (Harvard), and others about probabilistic catalogs. Both Finkbeiner's group and my own have produced probabilistic catalogs. But these are not usually a good idea! The problem is that they communicate (generally) posterior information and not likelihood information. It is related to the point that you can't sample a likelihood! The big idea is that knowledge is transmitted by likelihood, not posterior. A posterior combines your prior beliefs with your likelihood. If I want to update my beliefs using your catalog, I need your likelihood, and I don't want to take on your prior (your beliefs) too.

This sounds very ethereal, but it isn't: The math just doesn't work out if you get a posterior catalog and want to do science with it. You might think you can save yourself by dividing out the prior, but (a) that isn't always easy to do, and (b) it puts amazingly strong constraints on the density of your samplings, constraints that are unachievable in most real scientific contexts. These are potentially huge problems for LSST and future Gaia data releases. Right now (in DR2, anyway) Gaia is doing exactly the correct thing, in my opinion.
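
To see point (b) concretely, here is a one-dimensional Gaussian toy (every number in it is an assumption) of the divide-out-the-prior move: reweight posterior samples by the inverse prior so that expectations target the likelihood instead of the posterior, and watch the effective sample size fall apart.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Toy one-dimensional problem: likelihood centered at 2, prior centered at 0.
prior = stats.norm(0.0, 1.0)
# The posterior (prior times likelihood, both unit-variance Gaussians) is N(1, 1/2).
samples = rng.normal(1.0, np.sqrt(0.5), size=2000)  # "someone's posterior samples"

# "Divide out the prior": importance weights proportional to 1 / prior(theta).
w = 1.0 / prior.pdf(samples)
w /= w.sum()

# The reweighted mean targets the likelihood mean (2.0) rather than the
# posterior mean (1.0), but a few enormous weights dominate the sum, so the
# effective sample size is far smaller than the number of samples.
print("reweighted mean:", np.sum(w * samples))
print("effective sample size:", 1.0 / np.sum(w ** 2), "of", samples.size)
```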

2019-08-21

making a fake spectroscopic catalog

My enforced week off work has been awesome for writing code. I deepened my knowledge and interest in the Google (tm) Colaboratory (tm) by writing a notebook (available here) that constructs fake stars in a fake galaxy and observes them noisily in a fake spectroscopic survey. This is in preparation for measuring the selection function and doing inference to determine the properties of the whole galaxy from observations of the selected, noisily observed stars. This in turn relates to the paper on selection functions and survey design that I am writing with Rix (MPIA); it could be our concrete example.
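
Not the notebook itself, but here is a minimal sketch of the same kind of construction, in which every distribution and number is an arbitrary stand-in: draw true distances and luminosities for fake stars, compute apparent magnitudes, apply a hard magnitude-limited selection, and add noise to what survives.

```python
import numpy as np

rng = np.random.default_rng(7)
n_true = 100_000

# Fake "galaxy": distances [kpc] and absolute magnitudes from toy distributions.
dist = 0.1 + rng.exponential(scale=3.0, size=n_true)
abs_mag = rng.normal(0.5, 1.0, size=n_true)

# Apparent magnitudes, and a hard magnitude-limited selection at m = 14.
app_mag = abs_mag + 5.0 * np.log10(dist * 1000.0 / 10.0)
selected = app_mag < 14.0
n_sel = selected.sum()

# "Observe" the selected stars noisily: photometric noise on the magnitudes,
# and noise on the parallaxes (inverse distances).
obs_mag = app_mag[selected] + 0.02 * rng.standard_normal(n_sel)
obs_parallax = 1.0 / dist[selected] + 0.05 * rng.standard_normal(n_sel)

print(f"{n_sel} of {n_true} fake stars make it into the fake catalog")
```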

2019-08-20

visualizing substructure in large data

Today Doug Finkbeiner (Harvard), Josh Speagle (Harvard), and Ana Bonaca (Harvard) came to visit me in my undisclosed location in Heidelberg. We discussed many different things, including Finkbeiner's recent work on finding outliers and calibration issues in the LAMOST spectral data using a data-driven model, and Speagle's catalog of millions of stellar properties and distances in PanSTARRS+Gaia+2MASS+WISE.

Bonaca and I took that latter catalog and looked at new ways to visualize it. We both have the intuition that good visualization could and will pay off in these large surveys, both in finding structures and features directly, and in giving us intuition about how to build automated systems that will then look for those structures and features. And besides, excellent visualizations are productive in other senses too, like for use in talks and presentations. I spent much of my day coloring stars by location in phase space, or by the local density in phase space, or both. And playing with the color maps!
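
The density coloring is simple to set up; here is a minimal version, in which the fake clump, the scalings, and the choice of gaussian_kde as the density estimator are all just assumptions for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

rng = np.random.default_rng(11)
# Fake phase-space points (vertical position z and velocity v_z) with a small clump.
z = np.concatenate([rng.normal(0.0, 0.4, 5000), rng.normal(0.3, 0.05, 500)])
vz = np.concatenate([rng.normal(0.0, 20.0, 5000), rng.normal(15.0, 2.0, 500)])

# Estimate the local density at every point and use it as the color.
pts = np.vstack([z / z.std(), vz / vz.std()])
density = gaussian_kde(pts)(pts)
order = np.argsort(density)  # plot the densest points last so they sit on top

plt.scatter(z[order], vz[order], c=density[order], s=2, cmap="viridis")
plt.xlabel("z [kpc]")
plt.ylabel("v_z [km/s]")
plt.savefig("phase_space_density.png", dpi=150)
```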

There's a big visualization literature for these kinds of problems. Next step is to try to dig into that.

2019-08-19

writing out of hand

When I am stuck in a quiet attic room, doing nothing but writing, I tend to go off the rails! This has happened in my paper with Rix about target selection for catalogs and surveys: It is supposed to be about survey design and now it has many pages about the likelihood function. It's a mess. Is it two papers? Or is it a different paper?

2019-08-17

more catalog likelihoods

I resolved (for now, to my current satisfaction) my issues from a few days ago about likelihoods for catalogs. I showed that the likelihood that I advocate does not give biased inferences, and does permit inference of the selection function (censoring process) along with the inference of the world. I did this with my first ever use of the Google (tm) Colaboratory (tm). I wanted to see if it works, and it does. My notebook is here (subject to editing and changing, so no promises about its state when you go there). If your model includes the censoring process—that is, if you want to parameterize and learn the catalog censoring along with the model of the world—then (contra Loredo, 2004) you have to use a likelihood function that depends on the selection function at the individual-source level. And I think that is justified, because the underlying assumption is that the universe plus the censoring is the thing that is generating your catalog. That's a reasonable position to take.
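
In symbols (my notation, and a sketch of the idea rather than a transcription of the notebook): if the world produces objects with properties x at rate \lambda(x; \theta), and an object with properties x makes it into the catalog with probability S(x; \phi), then the likelihood I advocate for the N cataloged objects x_1, ..., x_N is

\ln \mathcal{L}(\theta, \phi) = \sum_{i=1}^{N} \ln\!\left[ S(x_i; \phi)\,\lambda(x_i; \theta) \right] - \int S(x; \phi)\,\lambda(x; \theta)\,\mathrm{d}x ~,

in which the selection function appears in every per-object term and not just in the normalizing integral, so the censoring parameters \phi can be inferred along with the world parameters \theta.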

2019-08-15

writing, talking

I'm stuck in bed with a bad back. I have been for a few days now. I am using the time to write in my summer writing projects, and talk to students and postdocs by Skype (tm). But it is hard to work when out sick, and it isn't necessarily a good idea. I'm not advocating it!

2019-08-14

struggling with likelihoods

I worked more on my selection-function paper with Rix. I continued to struggle with understanding the controversy (between Loredo on one hand and various collaborations of my own on the other) about the likelihood function for a catalog. In my view, if you take a variable-rate Poisson process and then censor it, where the censoring depends only on the individual properties of the objects being censored, you get a new variable-rate Poisson process with just a different rate function. If I am right, then there is at least one way of thinking about things such that the likelihood functions in the Bovy et al. and Foreman-Mackey et al. papers are correct. My day ended with a very valuable phone discussion of this with Foreman-Mackey. He (and I) would like to understand what the difference in assumptions is between us and Loredo.
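
The statement I am leaning on is Poisson thinning, written here in my own notation: if objects are generated by an inhomogeneous Poisson process with rate \lambda(x; \theta), and each object is independently kept with a probability S(x) that depends only on that object's own properties x, then the kept objects are themselves an inhomogeneous Poisson process with rate

\tilde{\lambda}(x; \theta) = S(x)\,\lambda(x; \theta) ~,

which is exactly a new variable-rate Poisson process with a different rate function.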

I also worked today with Soledad Villar (NYU) to develop capstone projects for the master's program in the Center for Data Science. The master's students do research projects, and we have lots of ideas about mashing up deep learning and astrophysics.

2019-08-13

comparing points to points

For a number of projects, my group has been trying to compare point sets to point sets, to determine transformations. Some contexts have been in calibration (like photometric and astrometric calibration of images, where stars need to align, either on the sky or in magnitude space), and others have been in dynamics. Right now Suroor Gandhi (NYU), Adrian Price-Whelan (Flatiron), and I have been trying to find transformations that align phase-space structures (and especially the Snail) observed in different tracers: What transformation between tracers matches the phase-space structure? These projects are going by our code name MySpace.

Projects like these tend to have a pathology, however, related to a pathology that Robyn Sanderson (Flatiron) and I found in a different context in phase space: If you write down a naive objective for matching two point clouds, the optimal match often has one point cloud shrunk down to zero size and put on top of the densest location on the other point cloud! Indeed, Gandhi is finding this, so we decided (today) to try symmetrizing the objective function to stop it. That is, don't just compare points A to points B, but also symmetrically compare points B to points A. Then (I hope) neither set can shrink to zero usefully. I hope this works! Now to make a symmetric objective function...
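
As a toy version of what the symmetrization buys (the Gaussian-kernel score and its scale are my assumptions here, not the actual MySpace objective): score the points of A against a kernel density built from B, score B against A the same way, and add the two terms. A collapsed cloud does well on the first term but terribly on the second.

```python
import numpy as np

def one_way_score(A, B, sigma=0.2):
    """Mean negative log of a Gaussian-kernel affinity of each point of A to the
    cloud B. It is small when every point of A sits near some point of B, so it
    can be "cheated" by shrinking A onto the densest part of B."""
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)   # (n_A, n_B)
    affinity = np.exp(-0.5 * d2 / sigma ** 2).mean(axis=1)
    return -np.log(affinity + 1e-300).mean()

def symmetric_objective(A, B, sigma=0.2):
    """Symmetrized objective: a collapsed A may score well against B, but then
    B scores terribly against the collapsed A, so the cheat no longer pays."""
    return one_way_score(A, B, sigma) + one_way_score(B, A, sigma)

# Toy check with two samples of the same 2-d Gaussian: collapsing A onto the
# origin improves the one-way score but ruins the symmetric one.
rng = np.random.default_rng(5)
A = rng.standard_normal((500, 2))
B = rng.standard_normal((500, 2))
A_collapsed = np.zeros_like(A)
print(one_way_score(A_collapsed, B), one_way_score(A, B))
print(symmetric_objective(A_collapsed, B), symmetric_objective(A, B))
```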

2019-08-12

likelihood function for a catalog

I spent my research time today writing in a paper Rix (MPIA) and I are preparing about selecting sources for a catalog, that is, target selection. The fundamental story is that you need to make a likelihood function at the end of the day. And this, in turn, means that you need a tractable and relatively accurate selection function. This all took me down old paths I have traveled with Bovy (Toronto) and Foreman-Mackey (Flatiron).

In email correspondence, Foreman-Mackey reminded me of past correspondence with Loredo (Cornell), who disagrees with our work on these things for very technical reasons. His (very nice) explanation of his point is around equations (8) through (10) in this paper: It has to do with how to factorize a probability distribution for a collection of objects obtained in a censored, variable-rate Poisson process. But our historical view of this (and my restored view after a day of struggling) is that the form of the likelihood depends on fine details of how you believe the objects of study were selected for the catalog, or censored. If they were censored only by your detector, I think Loredo's form is correct. But if they were censored for physical reasons over which you have no dominion (for example a planet transit obscured by a tiny fluctuation in a star's brightness), the selection can come into the likelihood function differently. That is, it depends on the causal chain involved in the source censoring.

2019-08-09

selection, EPRV, spirals

[I have been on travel of various kinds, mostly non-work, for almost two weeks, hence no posts!]

While on my travels, I wrote in my project about target selection for spectroscopic surveys (with Rix) and my project about information theory and extreme-precision radial-velocity measurement (with Bedell). I also discovered this nice paper on Cepheid stars in the disk, which is a highly relevant position-space complement to what Eilers and I have been doing in velocity space.