2015-08-28
Today was my last full day at MPIA for 2015. Ness and I worked on her age paper, and made the list of final items that need to be completed before the paper can be submitted, first to the SDSS-IV Collaboration, and then to the journal. I also figured out a bunch of baby steps I can do on my own paper with the age catalog.
2015-08-27
reionization
At MPIA Galaxy Coffee, K. G. Lee (MPIA) and Jose Oñorbe (MPIA) gave talks about the intergalactic medium. Lee spoke about reconstruction of the density field, and Oñorbe spoke about reionization. The conversations continued into lunch, where I spoke with the research group of Joe Hennawi (MPIA) about various problems in inferring things about the intergalactic medium and quasar spectra in situations where (a) it is easy to simulate the data but (b) there is no explicit likelihood function. I advocated likelihood-free inference, also known as approximate Bayesian computation (ABC), plus adaptive sampling. We also discussed model selection, and I advocated cross-validation.
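For concreteness, here is a minimal sketch of the rejection-sampling flavor of ABC I had in mind; the simulator, summary statistic, prior, and tolerance are all made-up placeholders, not anything from the actual IGM problem.

```python
# Rejection ABC sketch: draw parameters from the prior, simulate, and keep
# draws whose summary statistic lands close to the observed summary.
import numpy as np

rng = np.random.default_rng(42)

def simulate(theta, n=500):
    # stand-in for an expensive forward simulation of the data
    return rng.normal(loc=theta, scale=1.0, size=n)

def summary(data):
    # stand-in summary statistic; real problems need carefully chosen ones
    return np.mean(data)

s_obs = summary(simulate(theta=1.3))   # pretend these are the observations

accepted = []
for _ in range(20000):
    theta = rng.uniform(-5.0, 5.0)     # draw from a (flat) prior
    if abs(summary(simulate(theta)) - s_obs) < 0.05:
        accepted.append(theta)         # approximate posterior sample

print(len(accepted), np.mean(accepted))
```

Adaptive sampling would replace the blind prior draws with proposals concentrated where previous draws were accepted.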
In the afternoon, Ness and I continued code review and made decisions for final runs of The Cannon for our red-giant masses and ages paper.
2015-08-26
dust structures, code audit
At Milky Way group meeting, Eddie Schlafly (MPIA) showed beautiful results (combining PanSTARRS, APOGEE, 2MASS, and WISE data) on the dust extinction law in the Milky Way. He can see that some of the nearby dust structures have anomalous RV values (dust extinction law shapes). Some of these are previously unknown features; they only appear when you have maps of density and RV at the same time. Maybe he gets to name these new structures!
Late in the day, Ness and I audited her code that infers red-giant masses from APOGEE spectra. We found some issues with sigmas and variances and inverse variances. It gets challenging! One consideration is that you don't ever want to have infinities, so you want to use inverse variances (which become zero when data are missing). But on the other hand, you want to avoid singular or near-singular matrices (which happen when you have lots of vanishing inverse variances). So we settled on a consistent large value for sigma (and correspondingly small value for the inverse variance) that satisfies both issues for our problem.
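As a toy illustration of the compromise (the fill values and LARGE_SIGMA below are placeholders, not our actual choices):

```python
# Missing or bad pixels get a large-but-finite sigma, so the inverse variance
# is small-but-nonzero: no infinities, and no exactly singular matrices.
import numpy as np

LARGE_SIGMA = 1.0e3   # placeholder; the value has to be tuned to the problem

flux  = np.array([1.01, 0.98, np.nan, 1.05, np.nan])
sigma = np.array([0.02, 0.03, np.nan, 0.02, np.nan])

bad = ~np.isfinite(flux) | ~np.isfinite(sigma)
flux  = np.where(bad, 1.0, flux)             # harmless fill value
sigma = np.where(bad, LARGE_SIGMA, sigma)    # large, finite uncertainty
ivar  = 1.0 / sigma ** 2                     # small, nonzero inverse variance

print(ivar)
```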
2015-08-25
color maps, stellar ages, stellar multiples
At breakfast I told Morgan Fouesneau (MPIA) my desiderata for a set of matplotlib color maps: I want a map that indicates intensity (dark to bright, say), a map that indicates a sequential value (mass or metallicity or age, say), a map that indicates residuals away from zero that de-emphasizes the near-zero values, and a map that is the same but that emphasizes the near-zero values. I want the diverging maps to never hit pure white or pure black (indeed none of these maps should) because we always want to distinguish values from “no data”. And I want them to be good for people with standard color-blindness. But here's the hard part: I want all four of these colormaps to be drawn from the same general palette, so that a scientific paper that uses them will have a consistent visual feel.
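One way to prototype the idea in matplotlib is sketched below; the hex colors are arbitrary placeholders, not a finished (or color-blind-checked) design.

```python
# Four colormaps drawn from one base palette: an intensity map, a sequential
# map, and two diverging maps that differ in how they treat near-zero values.
# None of the endpoints are pure white or pure black.
from matplotlib.colors import LinearSegmentedColormap

palette = ["#2c2a4a", "#4f518c", "#907ad6", "#dabfff"]   # placeholder palette

intensity  = LinearSegmentedColormap.from_list("intensity", palette)
sequential = LinearSegmentedColormap.from_list("sequential", palette[::-1])

# diverging map that de-emphasizes near-zero residuals (muted middle color)
div_deemph = LinearSegmentedColormap.from_list(
    "div_deemph", [palette[0], "#c9c9c9", palette[3]])

# diverging map that emphasizes near-zero residuals (saturated middle color)
div_emph = LinearSegmentedColormap.from_list(
    "div_emph", [palette[0], palette[2], palette[3]])
```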
Before lunch, Ness and I met with Marie Martig (MPIA) and Fouesneau to go through our stellar age results. Martig and Fouesneau are writing up a method to use carbon and nitrogen features to infer red-giant ages and masses. Ness and I are writing up our use of The Cannon to get red-giant ages and masses. It turns out that The Cannon (being a brain-dead data-driven model) has also chosen, internally, to use carbon and nitrogen indicators. This is a great endorsement of the Martig and Fouesneau method and project. Because they are using their prior beliefs about stellar spectroscopy better than we are, they ought to get more accurate results, but we haven't compared in detail yet.
Late in the day, Foreman-Mackey and I discussed K2 and Kepler projects. We discussed at length the relationship between stellar multiplicity and the inference of binary, triple, and higher-order multiple populations. Has all this been done for stars, just like we are doing it for exoplanets? We also discussed the vetting of transiting-planet candidates, and the fact that you can't remove the non-astrophysical (systematics-induced) false positives unless you have a model for all the things that can happen.
2015-08-24
visualization, delta functions, interpretability
Late in the day, Rix, Ness, and I showed Ben Weiner (Arizona) the figures we have made for our paper on inferring red-giant masses and ages from APOGEE spectroscopy. He helped us think about changes we might make to the figures to bolster the arguments and make them clearer.
I spent some of the day manipulating delta functions and mixtures of delta functions for my attempt to infer the star-formation history of the Milky Way. I learned (for the Nth time) that it is better to manipulate Gaussians than delta functions; delta functions are way too freaky! And, once again, thinking about things dimensionally (that is, in terms of units) is extremely valuable.
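A toy version of the Gaussian point, with made-up ages, weights, and noise level (note the units, which keep the bookkeeping honest):

```python
# A star-formation history written as a mixture of delta functions at ages t_k
# becomes, after convolution with Gaussian age noise, an ordinary Gaussian
# mixture that is easy (and safe) to evaluate.
import numpy as np

t_k = np.array([1.0, 4.0, 8.0, 12.0])    # Gyr, assumed age grid
w_k = np.array([0.1, 0.3, 0.4, 0.2])     # dimensionless weights, sum to 1
sigma_age = 2.0                           # Gyr, assumed age uncertainty

def p_obs_age(t_obs):
    # density (per Gyr) of an observed, noisy age under the mixture
    comps = np.exp(-0.5 * ((t_obs - t_k) / sigma_age) ** 2)
    comps /= np.sqrt(2.0 * np.pi) * sigma_age
    return np.sum(w_k * comps)

print(p_obs_age(5.0))
```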
In the morning, Rix and I wrote to Andy Casey (Cambridge) regarding a proposal he made to use The Cannon and things we know about weak (that is, linearly responding) spectral lines to create a more interpretable or physically motivated version of our data-driven model, and maybe get detailed chemical abundances. Some of his ideas overlap with what we are doing with Yuan-Sen Ting (Harvard). Unfortunately, there is no real way to benefit enormously from the flexibility of the data-driven model without also losing interpretability. The problem is that the training set can have arbitrary issues within it, and these become part of the model; if you can understand the training set so well that you can rule out these issues, then you don't need a data-driven model!
2015-08-19
stars on a train
[I caught a train to London today. What a civilized world we live in!] I started to write up my ideas about pairs of pairs of spectral twins.
2015-08-18
taking numerical derivatives; spectroscopic tagging
We looked at numerical derivatives of models of stellar spectroscopy, taken by Yuan-Sen Ting (Harvard). When you numerically differentiate someone else's computer code, you have to be careful! If you take a step size that is too large, the behavior won't be linear. If you take a step size that is too small, the differences won't be trustable (because the code has unknown convergence precision). So there should be a “sweet spot”. Ting's analysis doesn't clearly show such a sweet spot for his derivatives of the Kurucz stellar models. That's concerning.
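The test I have in mind looks something like the sketch below, with a cheap stand-in function; floating-point roundoff plays the role of the code's unknown convergence noise.

```python
# Central differences over a range of step sizes: errors blow up for tiny h
# (noise) and for large h (curvature), with a flat "sweet spot" in between.
import numpy as np

def model(x):
    # stand-in for an expensive stellar-model evaluation
    return np.sin(x)

x0, truth = 0.7, np.cos(0.7)
for h in np.logspace(-13, -1, 13):
    estimate = (model(x0 + h) - model(x0 - h)) / (2.0 * h)
    print(f"h = {h:.0e}   error = {abs(estimate - truth):.2e}")
```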
Mid-afternoon I had an idea for a new project, related to the above, and also to The Cannon: If we have a pair of matched pairs (yes, four stars), all of which have similar Teff and logg, but two of which are from one open cluster and two of which are from another, we should be able to look at the precision of any possible abundance estimate, even without any model. That is, we can see at what signal-to-noise we can differentiate the pairs, or we can see how much more difference there is across clusters as compared to within clusters. This could be a baby step towards a fully data-driven, spectral-space chemical tagging method, something I have dreamed about before.
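A toy version of the test, with fake spectra and made-up noise levels, just to show the comparison being proposed:

```python
# Compare the chi-squared distance between the two stars within each cluster
# to the distance across clusters; the clusters are distinguishable at a given
# signal-to-noise wherever "across" clearly exceeds "within".
import numpy as np

rng = np.random.default_rng(0)
npix = 1000
truthA = 1.0 + 0.02 * rng.standard_normal(npix)   # cluster A "true" spectrum
truthB = truthA.copy()
truthB[::50] += 0.01                               # small abundance-like offsets

def chi2(x, y, sigma):
    return np.sum((x - y) ** 2) / sigma ** 2

for sigma in [0.002, 0.005, 0.01, 0.02]:
    a1, a2, b1, b2 = (t + sigma * rng.standard_normal(npix)
                      for t in [truthA, truthA, truthB, truthB])
    within = 0.5 * (chi2(a1, a2, sigma) + chi2(b1, b2, sigma))
    across = chi2(a1, b1, sigma)
    print(f"sigma = {sigma}: within = {within:.0f}, across = {across:.0f}")
```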
2015-08-17
information theory and stellar spectroscopy
[I am supposed to still be on vacation but we came back a day early.]
At the end of the day, Rix and I met with Yuan-Sen Ting (Harvard) and Ness to discuss the state of projects. Ting is modeling stellar spectra with linear combinations of model derivatives and has some extremely promising results (and some small bugs). We worked on diagnosis. One great thing about his project is that many of the results (on fake data, anyway) can be predicted exactly by methods of information theory. That is, he is close to having a testing framework for his scientific project! This relates to things CampHogg has discussed previously.
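One standard information-theory prediction of this kind is the Cramér-Rao bound built from the derivative spectra (my illustration, not necessarily the exact check Ting uses); here is a sketch with random placeholder inputs.

```python
# For a model linear in its labels with Gaussian pixel noise, the Fisher
# information matrix built from the derivative spectra sets the best possible
# label precisions (Cramer-Rao). The derivative matrix here is random.
import numpy as np

rng = np.random.default_rng(1)
npix, nlabel = 7000, 5
D = 0.01 * rng.standard_normal((npix, nlabel))   # d(flux)/d(label), placeholder
sigma = 0.02                                      # per-pixel flux uncertainty

fisher = D.T @ D / sigma ** 2                     # Fisher information matrix
crlb = np.linalg.inv(fisher)                      # covariance lower bound
print(np.sqrt(np.diag(crlb)))                     # best-case label precisions
```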
With Ness we went through all of the figures and discussed their relevance and necessity for her paper on measuring stellar masses and ages with The Cannon. Her current figures follow a very nice set of conventions, in which a progressive "grey" color map is used to indicate density (of, say, points in a scatter plot) but a sequential colored color map is used to indicate any other quantity (for example, mean stellar age in that location). The paper shows clearly that we can measure stellar masses and ages, and explains how. We made many small change requests.
2015-08-13
confusion in angle-spectrum space, and search
At Galaxy Coffee, Trevor Mendel (MPE) discussed resolved stellar populations as observed with the MUSE integral-field spectroscopy instrument. They extract spectra using spatial priors based on HST imaging, which is not unlike the forced photometry we did with WISE. This is obviously a good idea in confused fields. That led to a bus conversation with Mendel about the confusion limit as applied to spectroscopic (or multi-band imaging) data. The confusion limit is a function (in this case) of the spectral diversity, as well as spatial resolution; indeed a band with low angular resolution but in which there is strong spectral diversity could help enormously with confusion. This is a fundamental astronomy point which (to my limited knowledge of the literature) may be unexplored. We also discussed what the data-analysis problems would look like if we had models of stellar spectra that we believed.
In the afternoon, Price-Whelan and I pair-coded a search algorithm for stellar structures that hew close to one orbit, given noisy phase-space measurements. We chased the problem for a while and then decided we have to suck it up and do some real engineering. Brute-force search is slow! Our usual mode is play first, profile later, but this problem is slow enough that we need to profile before playing. In particular, we have to apply some engineering know-how to the discretization of our model (computed) orbits and to our comparison of data to models (where we probably need a kd-tree or the like).
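The kind of engineering step we have in mind, sketched with fake inputs (the shapes, units, and distance tolerance are all placeholders):

```python
# Put the densely sampled model-orbit points into a kd-tree and query the
# observed stars against it, instead of computing all pairwise distances.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
orbit_points = rng.standard_normal((100000, 6))   # model orbit in phase space
stars = rng.standard_normal((5000, 6))            # observed stars (same units!)

tree = cKDTree(orbit_points)
dist, _ = tree.query(stars, k=1)                  # nearest orbit point per star
print(np.sum(dist < 0.1))                         # stars close to the orbit
```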
2015-08-12
the star-formation history, stream-finding, and more
[Off the grid for the last few days.]
In Milky Way group meeting, Xue (MPIA) talked about our Kinematic Consensus project, which is starting to show promise: We look for sets of stars with observable properties consistent with having the same orbital integrals of motion. It appears to group stars even when the trial potential (we use for orbit construction) is significantly different from the true potential.
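A toy sketch of the underlying idea, with a made-up trial potential and fake data (not the actual Kinematic Consensus code):

```python
# Compute approximate integrals of motion (energy, z angular momentum) in a
# trial potential; stars that clump in this space are candidate co-orbiting
# groups. The logarithmic potential here is an assumption for illustration.
import numpy as np

rng = np.random.default_rng(7)
pos = rng.uniform(-10.0, 10.0, size=(1000, 3))    # kpc, fake positions
vel = rng.normal(0.0, 150.0, size=(1000, 3))      # km/s, fake velocities

v0 = 220.0                                         # km/s, circular-velocity scale
r = np.linalg.norm(pos, axis=1)
energy = 0.5 * np.sum(vel ** 2, axis=1) + v0 ** 2 * np.log(r)
l_z = pos[:, 0] * vel[:, 1] - pos[:, 1] * vel[:, 0]

# stars with nearly equal (energy, l_z) are candidates for a common orbit
print(energy[:3], l_z[:3])
```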
At that same meeting, I described my hopes for modeling or measuring the Milky Way star-formation history, using stellar ages derived from stellar masses on the red-giant branch. This got Dalcanton (UW) asking about the point that the age distribution of red-giant stars is not the age distribution of the disk as a whole, since different stars get onto the red-giant branch at different times and stay on for different times. This led to the question: Why don't I model the mass distribution and then transform it into a star-formation history “in post”, as it were? I like it! So I will re-code tomorrow. It is very good to talk out a project early rather than late!
In the middle of the day, Price-Whelan and I pair-coded some of his search for substructure, in a project called King Kong, which is a direct competitor to Kinematic Consensus (and inspired by work by Ibata at Strasbourg). Here we look for orbits that “catch” a large number of stars (within their uncertainties). We struggled with some elementary geometry, but prevailed!
Late in the day, Kopytova swung by and we looked at marginalizing out the calibration (or continuum) vectors in her inference of stellar parameters for substellar companions. It turns out that the linear algebra is trivial, along the lines of Foreman-Mackey and my re-formulation of our K2 project. We will pair-code the matrix inversion lemma tomorrow.
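The linear algebra is the usual Gaussian marginalization of linear nuisance amplitudes; here is a sketch of the chi-squared part with fake matrices (the log-determinant term is omitted, and none of the sizes or prior variances are Kopytova's actual numbers).

```python
# If the calibration (continuum) vectors enter linearly with a Gaussian prior
# of variance `lam`, they marginalize out analytically: the data covariance
# becomes N + lam * A @ A.T, and the matrix inversion lemma (Woodbury) keeps
# the inverse cheap because only an ncal x ncal system has to be solved.
import numpy as np

rng = np.random.default_rng(5)
npix, ncal = 2000, 8
N_diag = np.full(npix, 0.02 ** 2)              # diagonal pixel noise variances
A = rng.standard_normal((npix, ncal))          # calibration basis vectors
lam = 10.0                                     # prior variance on amplitudes
resid = 0.02 * rng.standard_normal(npix)       # data minus mean model

Ninv_r = resid / N_diag
Ninv_A = A / N_diag[:, None]
small = np.eye(ncal) / lam + A.T @ Ninv_A      # the only matrix to factorize
b = A.T @ Ninv_r
chi2 = resid @ Ninv_r - b @ np.linalg.solve(small, b)
print(chi2)
```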
2015-08-07
in the space of the data!
Bird and I spent our last hours together in Heidelberg working out a plan and to-do list for his paper on the age–velocity relation in the Milky-Way disk. We planned tests and extensions, the scope of the minimal paper, and a visualization of the model. We worked out a way to express the model in the space of the raw-ish data, which is important: You can't assess what your model is doing in the space of the latent parameters; the data are the only things that exist (imho). That is, you have to project your model into the data space to assess where it is working and not working. And to decide whether you need to do more. That's old-school Roweis.
2015-08-06
Galaxy Coffee
At MPIA Galaxy Coffee, Rix talked about mono-abundance populations in the Milky Way, and Dan Weisz talked about combining photometric and spectroscopic observations to constrain the properties of distant, compact star clusters. In the first talk, Rix showed that at low abundances, the stellar populations want negative scale lengths: they are like “donuts” or “rings” around the Milky Way, because low-abundance stars in the disk appear to form far out. In the discussion, we argued about how we could use his results to test models for the thickness of the disk and its dependence on stellar age and orbital radius.
In the second talk, Weisz showed that inclusion (and marginalization out) of a non-trivial calibration model makes the inferences from badly-calibrated spectroscopy much more accurate. It also permits learning about calibration (he showed that you learn correct things about calibration) and it permits principled combination of different data sets of different calibration veracity or trustworthiness. All calibrations are imperfect, so you really want to leave the calibration free in everything you do; that's a nice challenge for the next generations of The Cannon.
2015-08-05
observing radial migration in the disk
At MPIA Milky Way group meeting, Rix talked about the recent paper by Sanders & Binney about extended distribution functions using a series of very simple, analytic prescriptions of various inputs. I objected to the paper: I don't see any reason to work with simulation-output-inspired simple functional forms when you have actual simulation outputs! There is no difference, intellectually or conceptually, between a piece of computer code executing tanh() and a similar piece of code executing output_of_simulation_h277() except that the latter, tied as it is to physical models, would be interpretable physically. But anyway. Rix pointed out that their prescription for radial migration could be useful for quantitative analysis of our age results. I agree!
Also at group meeting, and related to radial migration, there were discussions of results from Hernitschek, Sesar, and Inno on variable stars in PanSTARRS. It looks likely that Inno will be able to extract a catalog of Cepheid stars in the disk, and (if we can convince someone to take data) we can get metallicities and radial velocities for them. We also discussed (momentarily) the point that we should be doing visualization, discovery, and astronomy with Hernitschek's immense catalog of variable stars in PanSTARRS: She has colors, magnitudes, and variability timescale and amplitude for every point source!
2015-08-04
model derivatives
Today Yuan-Sen Ting (Harvard) gave Ness and me derivatives of Kurucz-like models of APOGEE spectral data with respect to individual element abundances and effective temperature and gravity. Ness and I plotted these theoretical derivatives against “beliefs” obtained by The Cannon about these same dependencies. They agree remarkably well! But in detail they disagree; it remains to figure out what parts of the disagreement are noise or real results. We also showed (based on the derivative vectors) that the precisions with which APOGEE can detect or measure individual abundances are very, very high.
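One simple way to make the comparison quantitative (my suggestion, with random stand-in vectors, not the plots we actually made):

```python
# Per-label cosine similarity across pixels between the theoretical derivative
# spectra and the data-driven (Cannon-like) first-order coefficient vectors.
import numpy as np

rng = np.random.default_rng(2)
npix, nlabel = 7000, 5
d_theory = rng.standard_normal((npix, nlabel))                   # placeholder
d_cannon = d_theory + 0.5 * rng.standard_normal((npix, nlabel))  # placeholder

num = np.sum(d_theory * d_cannon, axis=0)
den = np.linalg.norm(d_theory, axis=0) * np.linalg.norm(d_cannon, axis=0)
print(num / den)   # values near 1 mean the dependencies agree in shape
```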
In other news, I found and fixed bugs in my star-formation history code, but none of the bug fixes led to reasonable results. Time to write some tests.
2015-08-03
likelihood functions for stellar age relations
Bird (Vanderbilt) and I pair-coded a likelihood function for the age–velocity relation in the Milky Way Disk. After that, I coded up a likelihood function for the star-formation history in the Milky Way disk, given very noisy age estimates (from The Cannon). By the end of the day, Bird was successfully MCMC sampling his parameters, and my inference was going massively unstable. I am sure I have a simple bug.
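For the age–velocity part, here is an illustrative likelihood (an assumed power-law form with simulated data, not Bird's actual model):

```python
# Assume the vertical velocity dispersion grows as a power law in age and that
# velocities are Gaussian at fixed age; the likelihood is then a product of
# Gaussians with age-dependent widths. All numbers below are simulated.
import numpy as np

rng = np.random.default_rng(11)
ages = rng.uniform(0.5, 12.0, size=2000)                             # Gyr
v_z = 12.0 * (ages / 10.0) ** 0.4 * rng.standard_normal(ages.size)   # km/s

def ln_likelihood(params, ages, v_z):
    sigma10, beta = params                       # km/s at 10 Gyr, and slope
    sigma = sigma10 * (ages / 10.0) ** beta
    return np.sum(-0.5 * (v_z / sigma) ** 2 - np.log(sigma))

print(ln_likelihood([12.0, 0.4], ages, v_z))     # near the simulated truth
print(ln_likelihood([20.0, 0.1], ages, v_z))     # typically much lower
```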
Price-Whelan, Rix, and I discussed a brute-force search for stellar streams in stellar catalogs called KingKong. It samples randomly in orbit space for a trial or toy potential and asks, for each orbit, how many stars have observations that are consistent with being on that orbit. The method is simple. But it is all about implementation details.
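A minimal sketch of the counting step, with fake inputs (real trial orbits would come from an orbit integrator in the toy potential):

```python
# Draw many trial "orbits", and for each one count the stars whose noisy
# phase-space coordinates land within a few sigma of the orbit; orbits that
# catch many stars are candidate streams. Tolerances and shapes are made up.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(13)
sigma = np.full(6, 0.2)                          # per-coordinate uncertainties
stars_phys = rng.standard_normal((3000, 6))      # fake observed coordinates
stars = stars_phys / sigma                       # rescale so distance ~ chi

best = (-1, -1)
for trial in range(200):
    orbit_phys = rng.standard_normal((2000, 6))  # fake trial-orbit point cloud
    dist, _ = cKDTree(orbit_phys / sigma).query(stars, k=1)
    n_caught = int(np.sum(dist < 3.0))           # within ~3 sigma (assumed cut)
    best = max(best, (n_caught, trial))

print("best trial:", best)
```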