2017-03-27

identical, statistically speaking

My research activity today was to re-write, from scratch (well, I really started yesterday) my document on how you tell whether two noisily measured objects are identical. This is an old and solved problem! But I am writing the answer in astronomer-friendly form, with a few astronomy-related twists. I have no idea whether this is a paper, a section of a paper, or something else. My re-write was caused by the algebra I learned from Leistedt, and the customers (so to speak) of the document are Rix, Ness, and Hawkins, all of whom are thinking about finding identical pairs of stars.

2017-03-24

how to write an April Fools' paper

I had a great visit to the University of Toronto Department of Astronomy and Astrophysics (and Dunlap Institute) today. I had great conversations about scintillometry (new word?) and the future of likelihood functions and component separation in the CMB. I also discussed pairwise velocity differences in cosmology, and probabilistic supernova classification. There is lots going on. I gave my talk on The Cannon, in which I was perhaps way too pessimistic about chemical tagging!

Early in the day, I ate Toronto-style (no, not Montreal-style) bagels with Dustin Lang (Toronto) and discussed many of the things we like to discuss, like finding very faint outer-Solar-System objects in all the data Lang wrangles, like the differences between accuracy and precision, and even how to define accuracy in astrophysics, and like April Fools' papers, which have to meet four criteria:

  1. conceptually interesting inference
  2. extremely challenging computation
  3. no long-term scientific value to the specific results found
  4. non-irrelevant mention of April 1 in abstract
It is a brutal set of requirements, but we have met them twice. I think this year is out (because of criterion 2), but maybe 2018?

2017-03-23

math with Gaussians

My one piece of research news today was an email exchange with Boris Leistedt (NYU) in which he completely took me to school on math with Gaussians. My intuition (expressed this week) that there was an easier way to do the operations I have been doing was right! But everything else I was doing was not wrong so much as wrong-headed. Anyway, this should simplify some things right away. The key observation is that a product of Gaussians can be trivially transformed into another product of Gaussians in another basis. More soon!
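
For the record, the identity I believe he is pointing me to is the standard Gaussian product rule (generic symbols here, nothing specific to my problem): a product of two Gaussians in x factors into a Gaussian in x times a Gaussian that does not involve x at all,

    \mathcal{N}(x \mid a, A)\;\mathcal{N}(x \mid b, B)
      \;=\; \mathcal{N}(a \mid b,\, A + B)\;\mathcal{N}(x \mid c, C) ,
    \qquad
    C = \left(A^{-1} + B^{-1}\right)^{-1} ,\quad
    c = C\,\left(A^{-1}a + B^{-1}b\right) .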

2017-03-22

circular reasoning, continuity of globular clusters

In Stars group meeting, Lauren Anderson (Flatiron) showed our toy example that demonstrates why our method for de-noising the Gaia TGAS data works. That led to some useful conversation that might help us explain our project better. I didn't take all the notes I should have! One idea that came up is that if there are two populations, one of which is seen only at very low signal-to-noise, then that second population can easily get pulled into the first. Another is the question of the circularity of the reasoning: technically, our reasoning is circular, but it wouldn't be if we marginalized out the hyper-parameters (that is, the parameters of our color–magnitude diagram).

Also in the Stars meeting, Ruth Angus (Columbia) suggested how we might responsibly look for the differences in exoplanet populations with stellar age. And Semyeong Oh (Princeton) and Adrian Price-Whelan (Princeton) described their very successful observing run to follow up the comoving stellar pairs. Preliminary analyses suggest that many of the pairs (which we found only with transverse information) are truly comoving.

In Cosmology group meeting, Jeremy Tinker discussed the possibility of using halo-occupation-like approaches to determine how the globular cluster populations of galaxies form and evolve. This led to a complicated and long discussion, with many ideas and issues arising. I do think that various simple scenarios could be ruled out, making use of some kind of continuity argument (with sources and sinks, of course).

I spent some time hidden away working on multiplying and integrating Gaussians. I am doing lots of algebra, completing squares. I have the tiniest suspicion that there is an easier way, or that all of the math I am doing has a simple answer at the end that I could have seen before starting.
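
For my own future reference, the square I keep completing is the usual quadratic-form identity (generic symbols, with A symmetric and positive definite):

    x^\top A\,x - 2\,b^\top x
      \;=\; (x - A^{-1}b)^\top A\,(x - A^{-1}b) \;-\; b^\top A^{-1} b .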

2017-03-21

half-pixel issues; building our own Gibbs sampler

First thing in the morning I met with Steven Mohammed (Columbia) and Dun Wang (NYU) to discuss GALEX calibration and imaging projects. Wang has a very clever astrometric calibration of the satellite, built by cross-correlating photons with the positions of known stars. This astrometric calibration depends on properties of the photons for complicated reasons that relate to the detector technology on board the spacecraft. Mohammed finds, in an end-to-end test of Wang's images, that there might be half-pixel issues in our calibration. We came up with methods for tracking that down.

Late in the day, I met with Ruth Angus (Columbia) to discuss the engineering in her project to combine all age information (and self-calibrate all methods). We discussed how to make a baby test where we can do the sampling with technology we are good at, before we write a brand-new Gibbs sampler from scratch. Why, you might ask, would any normal person write a Gibbs sampler from scratch when there are so many good packages out there? Because you always learn a lot by doing it! If our home-built Gibbs doesn't work well, we will adopt a package.
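
For concreteness, here is the kind of minimal Gibbs sampler I mean, written for a toy correlated bivariate Gaussian (a sketch only; Angus's real problem has entirely different conditional distributions):

    import numpy as np

    def gibbs_bivariate_normal(rho, n_steps=10000, seed=42):
        # toy target: zero-mean bivariate normal, unit variances, correlation rho;
        # each full conditional is a one-dimensional Gaussian
        rng = np.random.default_rng(seed)
        x, y = 0.0, 0.0
        chain = np.empty((n_steps, 2))
        cond_sigma = np.sqrt(1.0 - rho ** 2)
        for i in range(n_steps):
            x = rng.normal(rho * y, cond_sigma)  # draw x given y
            y = rng.normal(rho * x, cond_sigma)  # draw y given x
            chain[i] = x, y
        return chain

    chain = gibbs_bivariate_normal(rho=0.9)
    print(np.cov(chain, rowvar=False))  # should approach [[1.0, 0.9], [0.9, 1.0]]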

2017-03-20

statistics questions

I spent time today writing in the method section of the Anderson et al. paper. I realized in writing it that we have been thinking about our model of the color–magnitude diagram as being a prior on the distance or parallax. But it isn't, really: it is a prior on the color and magnitude, which, for a given noisily observed star, becomes a prior on the parallax. We will compute these implicit priors explicitly (it is a different prior for every star) for our paper output. We have to describe this all patiently and well!
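
Schematically (my notation, not the paper's, ignoring extinction, and writing the parallax in mas): if the model supplies a prior density p(c, M) in color c and absolute magnitude M, then for a star of apparent magnitude m the change of variables from absolute magnitude to parallax induces a per-star prior of the form

    p(\varpi \mid m, c) \;\propto\;
      p\!\left(c,\; M = m + 5\log_{10}\varpi - 10\right)\,
      \frac{5}{\varpi\,\ln 10} ,

so the implicit prior really is a different function of parallax for every star.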

At some point during the day, Jo Bovy (Toronto) asked a very simple question about statistics: Why does re-sampling the data (given presumed-known Gaussian noise variances in the data space) and re-fitting deliver samples of the fit parameters that span the same uncertainty distribution as the likelihood function would imply? This is only true for linear fitting, of course, but why is it true (and no, I don't mean what is the mathematical formula!)? My view is that this is (sort-of) a coincidence rather than a result, especially since it (to my mind) confuses the likelihood and the posterior. But it is an oddly deep question.
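
Here is the kind of quick numerical check I have in mind for the linear case (the design matrix and noise level are made up for illustration): the scatter of the re-fit parameters lands on the likelihood-implied covariance.

    import numpy as np

    rng = np.random.default_rng(17)
    x = np.linspace(0.0, 10.0, 30)
    A = np.vander(x, 2)                   # design matrix for a straight line
    sigma = 0.5                           # presumed-known Gaussian noise rms
    Cinv = np.eye(x.size) / sigma ** 2
    y_obs = A @ np.array([1.3, -0.4]) + sigma * rng.normal(size=x.size)

    # likelihood-implied covariance of the best-fit parameters
    cov_ml = np.linalg.inv(A.T @ Cinv @ A)

    # re-sample the data with the known noise and re-fit, many times
    fits = np.empty((20000, 2))
    for i in range(len(fits)):
        y_resamp = y_obs + sigma * rng.normal(size=x.size)
        fits[i] = np.linalg.solve(A.T @ Cinv @ A, A.T @ Cinv @ y_resamp)

    # the empirical covariance of the re-fits approaches cov_ml
    print(np.cov(fits, rowvar=False), cov_ml)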

2017-03-16

a prior on the CMD isn't a prior on distance, exactly

Today my research time was spent writing in the paper by Lauren Anderson (Flatiron) about the TGAS color–magnitude diagram. I think of it as being a probabilistic inference in which we put a prior on stellar distances and then infer the distance. But that isn't correct! It is an inference in which we put a prior on the color–magnitude diagram, and then, given noisy color and (apparent) magnitude information, this turns into an (effective, implicit) prior on distance. This Duh! moment led to some changes to the method section!

2017-03-15

what's in an astronomical catalog?

The stars group meeting today wandered into dangerous territory, because it got me on my soap box! The points of discussion were: Are there biases in the Gaia TGAS parallaxes? and How could we use proper motions responsibly to constrain stellar parallaxes? Keith Hawkins (Columbia) is working a bit on the former, and I am thinking of writing something short with Boris Leistedt (NYU) on the latter.

The reason it got me on my soap box is a huge set of issues about whether catalogs should deliver likelihood or posterior information. My view (and, I think, the view of the Gaia DPAC) is that the TGAS measurements and uncertainties are parameters of a parameterized model of the likelihood function. They are not parameters of a posterior, nor the output of any Bayesian inference. If they were outputs of a Bayesian inference, they could not be used in hierarchical models or other kinds of subsequent inferences without a factoring out of the Gaia-team prior.
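
To spell out the hierarchical-model point in symbols (schematic notation of mine, not the DPAC's): if the catalog reports, for each star n, the center and width (\hat\varpi_n, \sigma_n) of a Gaussian likelihood function, then a downstream inference of population parameters \theta can simply write

    p(\{\hat\varpi_n\} \mid \theta)
      \;=\; \prod_n \int \mathcal{N}(\hat\varpi_n \mid \varpi_n, \sigma_n^2)\,
      p(\varpi_n \mid \theta)\,\mathrm{d}\varpi_n ,

whereas if the catalog delivered posterior information, each factor would first have to be divided by the catalog-maker's prior.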

This view (and this issue) has implications for what we are doing with our (Leistedt, Hawkins, Anderson) models of the color–magnitude diagram. If we output posterior information, we have to also output prior information for our stuff to be used by normals, downstream. Even with such output, the results are hard to use correctly. We have various papers, but they are hard to read!

One comment is that, if the Gaia TGAS contains likelihood information, then the right way to consider its possible biases or systematic errors is to build a better model of the likelihood function, given their outputs. That is, any systematics should be cast as adjustments to the likelihood function, not as posterior outputs, if at all possible.

Another comment is that negative parallaxes make sense for a likelihood function, but not (really) for a posterior pdf. Usually a sensible prior will rule out negative parallaxes! But a sensible likelihood function will permit them. The fact that the Gaia catalogs will have negative parallaxes is related to the fact that it is better to give likelihood information. This all has huge implications for people (like me, like Portillo at Harvard, like Lang at Toronto) who are thinking about making probabilistic catalogs. It's a big, subtle, and complex deal.

2017-03-14

snow day

[Today was a NYC snow day, with schools and NYU closed, and Flatiron on a short day.] I made use of my incarceration at home writing in the nascent paper about the TGAS color–magnitude diagram with Lauren Anderson (Flatiron). And doing lots of other non-research things.

2017-03-13

toy problem

Lauren Anderson (Flatiron) and I met early to discuss a toy model that would elucidate our color–magnitude diagram model project. Context: we want to write a section called “Why the heck does this work?” in our paper. We came up with a model so simple that I was able to implement it during the drinking of one coffee. It is, of course, a straight-line fit (with intrinsic width), which we then use to de-noise the data we started with.
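
Roughly this, in made-up variables (the real project lives in color–magnitude–parallax space and is more careful about the noise model): fit a line with an intrinsic variance by maximizing the marginal likelihood, then shrink each noisy point toward the fitted line.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)
    # fake data: a line with intrinsic scatter, observed with heteroscedastic noise
    x = rng.uniform(0.0, 10.0, 100)
    sigma = rng.uniform(0.3, 1.5, 100)
    y_true = 2.0 * x + 1.0 + 0.7 * rng.normal(size=100)   # 0.7 = intrinsic width
    y_obs = y_true + sigma * rng.normal(size=100)

    def neg_log_like(p):
        m, b, lnV = p
        var = sigma ** 2 + np.exp(lnV)        # observational plus intrinsic variance
        return 0.5 * np.sum((y_obs - m * x - b) ** 2 / var + np.log(var))

    m, b, lnV = minimize(neg_log_like, [1.0, 0.0, 0.0]).x
    V = np.exp(lnV)

    # de-noise: shrink each point toward the fitted line, weighting by the variances
    w = V / (V + sigma ** 2)
    y_denoised = w * y_obs + (1.0 - w) * (m * x + b)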

planning a paper sprint, completing a square

Lauren Anderson (Flatiron) and I are going to sprint this week on her paper on the noise-deconvolved color–magnitude diagram from the overlap of Gaia TGAS, 2MASS, and the PanSTARRS 3-d dust map. We started the day by making a long to-do list for the week that could end in submission of the paper. My first job is to write down the data model for the data release we will do with the paper.

At lunch time I got distracted by my project to find a better metric than chi-squared to determine whether two noisily-observed objects (think: stellar spectra or detailed stellar abundance vectors) are identical or indistinguishable, statistically. The math involved completing a huge square (in linear-algebra space) twice. Yes, twice. And then the result is—in a common limit—exactly chi-squared! So my intuition is justified, and I know where it will under-perform.
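
Schematically, and in my notation (this is a sketch of the structure, not the document itself): for two measurements x_1 and x_2 with Gaussian uncertainty covariances C_1 and C_2, the "same object" hypothesis marginalizes over the common true value; in the limit of a broad prior on that true value, the only data-dependent factor left in the odds is a Gaussian in the difference x_1 - x_2 with covariance C_1 + C_2, whose log is (up to constants) minus one half of

    \chi^2 \;=\; (x_1 - x_2)^\top\,(C_1 + C_2)^{-1}\,(x_1 - x_2) .

Away from that broad-prior limit, the prior-dependent factors survive, which is presumably where the differences from plain chi-squared show up.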

2017-03-10

the Milky Way halo

At the NYU Astro Seminar, Ana Bonaca (Harvard) gave a great talk about trying to understand the dynamics and origin of the Milky Way halo. She has a plausible argument that the higher-metallicity halo stars are the ones that formed in situ and migrated outward, while the lower-metallicity stars were accreted. If this holds up, I think it will test a lot of ideas about the Galaxy's formation, history, and dark-matter distribution. She also talked about fitting stellar streams to constrain the dark-matter component.

On that note, we started a repo for a paper on the information theory of cold stellar streams. We re-scoped the paper around information rather than the LMC and other peculiarities of the Local Group. Very late in the day I drafted a title and abstract. This is how I start most projects: I need to be able to write a title and abstract to know that we have sufficient scope for a paper.

2017-03-09

The Cannon and APOGEE

I had further discussions about the Cramér-Rao-bound (or Fisher-matrix) computations on cold stellar streams being performed by Ana Bonaca (Harvard). We discussed how things change as we increase the number of parameters, and designed some possible figures for a possible paper.

I had a long phone call with Andy Casey (Monash) about The Cannon, which is being run inside APOGEE2 to deliver parameters in a supplemental table in data release 14. We discussed issues of flagging stars that are far from the training set. This might get strange in high dimensions.

In further APOGEE2 and The Cannon news, I dropped an email on the mailing lists about the radial-velocity measurements that Jason Cao (NYU) has been making for me and Adrian Price-Whelan (Princeton). His RV values look much better than the pipeline defaults, which is perhaps not surprising: The pipeline uses some cross-correlation templates, while Cao uses a very high-quality synthetic spectrum from The Cannon. This email led to some useful discussion about other work that has been done along these lines within the survey.
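
For anyone curious, the template-matching step itself is tiny; here is a hypothetical, minimal sketch (not Cao's code; it assumes a pure Doppler shift and spectra continuum-normalized on a common wavelength grid):

    import numpy as np

    def rv_by_template_matching(wave, flux, ivar, template_flux, v_grid_kms):
        # shift the template by each trial velocity (positive v = receding)
        # and keep the velocity with the best chi-squared against the data
        c_kms = 299792.458
        chi2 = np.empty(len(v_grid_kms))
        for i, v in enumerate(v_grid_kms):
            shifted = np.interp(wave, wave * (1.0 + v / c_kms), template_flux)
            chi2[i] = np.sum(ivar * (flux - shifted) ** 2)
        return v_grid_kms[np.argmin(chi2)]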

2017-03-08

does the Milky Way disk have spiral structure?

At stars group meeting, David Spergel (Flatiron) was tasked with convincing us (and Price-Whelan and I are skeptics!) that the Milky Way really does have spiral arms. His best evidence came from infrared emission in the Galactic disk plane, but he brought together a lot of relevant evidence, and I am closer to being convinced than ever before. As my loyal reader knows, I think we ought to be able to see the arms in any (good) 3-d dust map. So, what gives? That got Boris Leistedt (NYU), Keith Hawkins (Columbia), and me thinking about whether we can do this now, with things we have in-hand.

Also at group meeting, Semyeong Oh (Princeton) showed a large group-of-groups she has found by linking together co-moving pairs into connected components by friends-of-friends. It is rotating with the disk but at a strange angle. Is it an accreted satellite? That explanation is unlikely, but if it turns out to be true, OMG. She is off to get spectroscopy next week, though John Brewer (Yale) pointed out that he might have some of the stars already in his survey.
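
Mechanically, the group-of-groups construction is just connected components on the graph whose edges are the co-moving pairs; a sketch of that bookkeeping (with a made-up pair list, and scipy doing the graph work):

    import numpy as np
    from scipy.sparse import coo_matrix
    from scipy.sparse.csgraph import connected_components

    # each row is one co-moving pair of star indices (hypothetical example data)
    pairs = np.array([[0, 1], [1, 2], [3, 4], [5, 6], [6, 0]])
    n_stars = pairs.max() + 1

    # build the sparse adjacency matrix and label its connected components
    adj = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])),
                     shape=(n_stars, n_stars))
    n_groups, labels = connected_components(adj, directed=False)
    print(n_groups, labels)  # stars sharing a label belong to the same group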

2017-03-07

finding the dark matter with streams

Today was a cold-stream science day. Ana Bonaca (Harvard) computed derivatives of stream properties with respect to a few gravitational-potential parameters, holding the present-day position and orientation of the stream fixed. This permits computation of the Cramér-Rao bound on any inference or estimate of those parameters. We sketched out some ideas about what a paper along these lines would look like. We can identify the most valuable streams, the streams most sensitive to particular potential parameters, the best combinations of streams to fit simultaneously, and the best new measurements to make of existing streams.
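
In symbols (my notation, assuming Gaussian observational uncertainties C_n on the predicted observables \bar{x}_n(\theta) of the n-th stream star), the relevant object is the Fisher matrix, whose inverse bounds the covariance of any unbiased estimate of the potential parameters \theta:

    F_{ij} \;=\; \sum_n
      \left(\frac{\partial \bar{x}_n}{\partial \theta_i}\right)^{\!\top}
      C_n^{-1}\,
      \left(\frac{\partial \bar{x}_n}{\partial \theta_j}\right) ,
    \qquad
    \mathrm{Cov}(\hat\theta) \;\succeq\; F^{-1} .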

Separately from this, I had a phone conversation with Adrian Price-Whelan (Princeton) about the point of doing stream-fitting. It is clear (from Bonaca's work) that fitting streams in toy potentials gives us way-under-estimated error bars. This means that we have to add a lot more flexibility to the potential to get accurate results. We debated the value of things like basis-function expansions, given that these are still in the regime of toy (if highly parameterized) models. We are currently agnostic about whether stream fitting will really reveal the detailed properties of the Milky Way's dark-matter halo; that is, for example, the properties that might change what we think the dark-matter particle is.