2015-06-30
I read (or skimmed, really) some classic papers on the star-formation history of the Milky Way, in preparation for re-asking this question with APOGEE data this summer. Papers I skimmed included Prantzos & Silk, which infers the SFH from (mainly) abundance distributions, and Gizis, Reid, and Hawley, which infers it from M-dwarf chromospheric activity. I also wrote myself a list of all the possible ways one might infer the SFH. I realized that not all of the ways I can think of have actually been executed. So now I have a whole set of projects to pitch!
2015-06-29
yet more K2 proposal
How can a 6-page proposal take more than six days to write? Also signed off on Dun Wang's paper on his pixel-level self-calibration of the Kepler Mission. Submit!
2015-06-26
more K2 proposal
In my research time today, I kept on the K2 proposal. I made the argument that we want K2 to point less well in Campaign 9 than it has in previous Campaigns, because we want the crowded field (which is in the Bulge of the Milky Way) to move significantly relative to the pixel grid. We need that redundancy (or heterogeneity?) for self-calibration. I hope that—if we get this proposal funded—we will also get influence over the spacecraft attitude management!
2015-06-25
K2 flat-field
I spent my research time today working on my proposal for K2 Campaign 9. The proposal is to self-calibrate to get the flat-field, which is critical for crowded-field photometry (even if done via image differencing).
2015-06-24
radical self-calibration
At group meeting, Fadely showed us plots that show that he can do what I call “radical” self-calibration with realistic (simulated) data from fields of stars. This is the kind of calibration where we figure out the flat-field and PSF simultaneously by insisting that the images we have could have been generated by point sources convolved with some pixel-convolved PSF. He also showed how the results degrade as our model of the PSF gets increasingly wrong. We can withstand percent-ish problems with our PSF model, but we can't withstand tens-of-percent. That's interesting, and useful. I feel like we are pretty safe for our HST WFC3 calibration project though: We know the PSF very well and have a great first guess at the flat too.
At the same meeting, we bitched about the Astronomers' Telegram, looked at an outburst from a black-hole source, argued about mapping the sky with Fermi GBM, and looked at K2 data on a Sanchis-Ojeda planet. Oh and right after group meeting, Malz demonstrated to me conclusively that our Bayesian hierarchical inference of the redshift distribution—given probabilistic photometric redshifts—will work!
2015-06-23
cavities
Total fail, although I spent time supporting New York's dental-health infrastructure.
2015-06-22
Ekta Patel
The only research part of the day was a great lunch with ex-Camp-Hogger (is it possible to be "ex" from CampHogg?) Ekta Patel (Arizona), where she talked to us about research and the graduate curriculum at Arizona. She is doing awesome research (on the LMC and Local Group satellites) right off the bat and loving the research focus of the course schedule at Arizona. I couldn't agree more! I pitched my idea that sub-pixel flat issues could in principle be messing with the incredibly small proper-motion measurements for the Local-Group satellites.
2015-06-19
K2 proposals
Today was #K2proposalSprint day. At group meeting, MJ gave us a review of a new paper on probabilistic approaches to weak lensing, which made many harsh (but useful) approximations. Then we pitched our K2 proposals and started writing. Price-Whelan pitched a proposal to find extragalactic exoplanets! One of the K2 fields touches the Sagittarius stream and therefore will contain (at Kepler sensitivity) some good red giants that might be planet hosts delivered by an accreted galaxy! Fed Bianco pitched a proposal to do lucky imaging (and improve lucky-imaging pipelines) to follow up microlensing events in the K2 Campaign 9 field (which is a bulge-imaging project aimed at microlensing). I pitched a proposal to determine the PSF and flat-field in Campaign 9, where the field will be so crowded that, for one, the flat-field and PSF will be inferable from the data, and, for another, both will need to be known to good precision to do any useful data analysis. We then spent the day working, but I have to admit I didn't get very far!
2015-06-18
2015-06-17
Conroy's stars, and kernel learning
We had a star-studded group meeting today. It kicked off with Charlie Conroy (Harvard) talking about some of his recent projects. In one, he looks at the time dependence of pixel brightnesses in M87, because the long-period variables in the stellar population lead to long-period variations in brightness. In principle these variations are a function of stellar population age and density. He showed data from a huge but under-exploited HST program. In another project, he is working on varying unknown physical properties of atomic transitions within a stellar atmosphere model to make an interpretable but data-driven model for stellar atmospheres. This is a great project, but involves coding up all one's prior beliefs about what can vary and how and in what ways. That's a very complicated prior pdf! In another, he discusses the limits of chemical tagging (with Yuan-Sen Ting, MPIA, with whom I will be working this summer). In this project, they find that even a small change in the precision with which chemical abundances can be measured might have a huge impact on any tagging project.
In the second half of group meeting, Andrew Gordon Wilson (CMU) spoke about his new work on kernel learning, in which he optimizes the likelihood of a Gaussian Process in which the kernel is represented as a mixture of Gaussians in spectral space. He has some amazing demos showing that the kernel learning gets a very different covariance matrix than the empirical covariance, which is highly relevant to modern cosmology (where the empirical covariance is all we ever use!). He also talked about some important philosophy about model complexity: For every simple model that works well (in a Bayesian sense), there are other, more complicated models that will always work better (also in that same Bayesian sense). This plays well with my disagreement with all the MacKay-like arguments that Bayes encapsulates Occam's Razor. It just doesn't!
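Just to fix the idea in my head, here is a minimal numpy sketch of a one-dimensional spectral-mixture kernel of the kind Wilson described (the spectral density is a mixture of Gaussians, so the kernel is a sum of cosine-modulated Gaussian envelopes). The component weights, frequencies, and bandwidths below are made-up placeholders, not anything fit to real data.

```python
import numpy as np

def spectral_mixture_kernel(tau, weights, means, variances):
    """Stationary 1-D kernel whose spectral density is a mixture of Gaussians:
    k(tau) = sum_q w_q exp(-2 pi^2 tau^2 v_q) cos(2 pi tau mu_q)."""
    k = np.zeros_like(tau, dtype=float)
    for w, mu, v in zip(weights, means, variances):
        k += w * np.exp(-2.0 * np.pi ** 2 * tau ** 2 * v) * np.cos(2.0 * np.pi * tau * mu)
    return k

# placeholder hyperparameters: one quasi-periodic component plus one broad,
# low-frequency component
weights = np.array([1.0, 0.4])
means = np.array([0.27, 0.0])       # central frequencies (cycles per day, say)
variances = np.array([1e-3, 1e-2])  # bandwidths of the spectral Gaussians

t = np.linspace(0.0, 30.0, 300)     # days
K = spectral_mixture_kernel(t[:, None] - t[None, :], weights, means, variances)
draw = np.random.default_rng(0).multivariate_normal(np.zeros(t.size), K + 1e-8 * np.eye(t.size))
```

Optimizing those weights, frequencies, and bandwidths against the GP marginal likelihood is the kernel-learning step; the learned kernel is what ends up looking so different from the empirical covariance.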
2015-06-16
model all the stars (again)
First thing in the morning I spoke with Scott Singer (NYU) about confirming or checking the ultra-short-period exoplanets shown to us by Sanchis-Ojeda. He is new to it all, so we talked about where the data are, how they are indexed and named, and how to plot them.
At the very end of the day, I pitched the project of making probabilistic models that can generate stellar variability (consistent with observations) to our visitor Andrew Gordon Wilson (CMU). The idea is to use all the light curves we have ever seen to build a family of non-trivial kernels (what's called “kernel learning” in the machine-learning literature) and a prior over those, so that we can model (in my sense, which involves a likelihood function) any stellar variability with a bespoke Gaussian Process. This is the key missing piece in our plans to take Kepler (or Kepler-like) light curves and separate them into the component generated by stellar variability, the component generated by spacecraft variability, and the component generated by any transiting companion: We need a good model of what stars can do!
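To be concrete about the "model in my sense" part: the deliverable is a marginal likelihood for any light curve given a kernel. Here is a minimal Cholesky-based sketch, with a hypothetical light curve and a plain squared-exponential kernel standing in for whatever learned kernel we would actually use.

```python
import numpy as np

def gp_log_likelihood(t, y, yerr, kernel):
    """GP marginal log-likelihood of a light curve y(t), given a stationary
    kernel(dt) and independent Gaussian observational noise yerr."""
    K = kernel(t[:, None] - t[None, :]) + np.diag(yerr ** 2)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K^{-1} y
    log_det = 2.0 * np.sum(np.log(np.diag(L)))
    return -0.5 * (y @ alpha + log_det + y.size * np.log(2.0 * np.pi))

# hypothetical light curve; the kernel here is a stand-in, not a learned one
rng = np.random.default_rng(42)
t = np.sort(rng.uniform(0.0, 30.0, 200))
y = np.sin(2.0 * np.pi * t / 3.7) + 0.1 * rng.normal(size=t.size)
yerr = 0.1 * np.ones_like(t)
kernel = lambda dt: 0.5 * np.exp(-0.5 * dt ** 2 / 2.0 ** 2)
print(gp_log_likelihood(t, y, yerr, kernel))
```

In the full problem, the stellar, spacecraft, and transit components would presumably each get their own term in the mean or covariance; the sketch above is just the likelihood machinery.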
2015-06-15
stellar masses and ages, huge determinants
There is growing evidence that Ness, Rix, and I can determine stellar masses (and therefore ages) on the red-giant branch using The Cannon, trained on asteroseismology results from Kepler and spectra from APOGEE. I am stoked. There is some debate among our team about what is going to be the spectral signature that is delivering the mass/age information. Last week, Ness showed me that at least some of the information is coming from very weak emission lines. The Cannon has discovered chromospheric activity! Today, Ness and I worked on visualizing the spectral regions that are delivering label information. We came up with a pretty novel plot, one draft example of which is below.
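For my own notes, here is a stripped-down sketch in the spirit of The Cannon's per-pixel, quadratic-in-labels model (train coefficients on labelled spectra, then optimize labels for a new spectrum). It omits the per-pixel intrinsic-scatter term and the inverse-variance weighting of the training step in the real code, and every array name and shape here is a placeholder, not the real APOGEE/Kepler data.

```python
import numpy as np
from scipy.optimize import least_squares

def design_matrix(labels):
    """Quadratic vectorizer: [1, labels, upper triangle of outer(l, l)]."""
    n, k = labels.shape
    quad = np.array([np.outer(l, l)[np.triu_indices(k)] for l in labels])
    return np.hstack([np.ones((n, 1)), labels, quad])

def train(training_labels, training_flux):
    """Training step: per-pixel coefficients by (unweighted) least squares.
    training_labels: (n_star, n_label), e.g. Teff, logg, [Fe/H], mass;
    training_flux: (n_star, n_pixel) continuum-normalized spectra."""
    A = design_matrix(training_labels)                  # (n_star, n_coeff)
    coeffs, *_ = np.linalg.lstsq(A, training_flux, rcond=None)
    return coeffs                                       # (n_coeff, n_pixel)

def infer_labels(coeffs, flux, ivar, label_guess):
    """Test step: optimize the labels of one new spectrum."""
    def resid(l):
        model = (design_matrix(l[None, :]) @ coeffs)[0]
        return (model - flux) * np.sqrt(ivar)
    return least_squares(resid, label_guess).x
```

The mass/age question is then: which coefficients, that is, which wavelengths, carry the mass information? That is exactly what the visualization is trying to show.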
Also today Andrew Gordon Wilson (CMU) showed up to school us on Gaussian Processes. He said that he would have no trouble computing the determinant of the covariance matrix for the CMB real-space likelihood function, even for Planck-sized data; the determinant would take seconds! His method involves a regular grid of inducing points and interpolation from that grid to the irregular locations of the data. So we threw down the challenge; now Foreman-Mackey and I have to get the problem set up for him. We'll see!
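For scale, here is a toy demonstration of why inducing points make the log-determinant cheap. This is not Wilson's actual machinery (his regular grid plus interpolation buys much more structure); it is just the matrix determinant lemma applied to an m-inducing-point approximation, which already reduces the cost from O(n^3) to O(n m^2). The kernel, grid, and noise level are placeholders.

```python
import numpy as np

def rbf(x1, x2, ell=1.0):
    return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / ell ** 2)

def logdet_inducing(x, u, sigma2, ell=1.0):
    """log det(sigma2 I + K_xu K_uu^{-1} K_ux) via the matrix determinant
    lemma; costs O(n m^2) for n data points and m inducing points."""
    Kuu = rbf(u, u, ell) + 1e-10 * np.eye(u.size)
    Kxu = rbf(x, u, ell)
    inner = Kuu + (Kxu.T @ Kxu) / sigma2
    return (np.linalg.slogdet(inner)[1] - np.linalg.slogdet(Kuu)[1]
            + x.size * np.log(sigma2))

# check against the brute-force O(n^3) determinant on a small problem
x = np.sort(np.random.default_rng(0).uniform(0.0, 10.0, 300))
u = np.linspace(0.0, 10.0, 40)          # regular grid of inducing points
sigma2 = 0.05
Kuu = rbf(u, u) + 1e-10 * np.eye(u.size)
K_approx = rbf(x, u) @ np.linalg.solve(Kuu, rbf(u, x))
print(logdet_inducing(x, u, sigma2),
      np.linalg.slogdet(K_approx + sigma2 * np.eye(x.size))[1])
```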
2015-06-12
radical self-calibration
At group meeting, Fadely showed us evidence that the radical self-calibration that we are executing for the HST WFC3 instrument can work: He showed that if you know the PSF—but nothing about any individual exposure—you can indeed infer the flat-field to some precision. Also and related, Vakili showed that he is getting pretty good estimates of the PSF in real HST WFC3 imaging. So we are getting close to going end-to-end on this project. I call this self-calibration “radical” because it doesn't rely on stars being observed more than once; it only relies on stable enough (or dense enough) imaging that the PSF can be accurately inferred. It works by asking what flat-field is required in order to generate good predictions for the data. One thing we are hoping: The quality of the results might depend more on the center of the PSF (the easy part) than the outskirts (the hard part); we are trying to understand that now. The long-term goal of this project is to save the asses of projects that took their data in violation of the principles for self-calibration.
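A cartoon of the flat-from-predictions idea, assuming (unrealistically) that the scene, that is, the star positions, fluxes, and the PSF, is already known: each pixel's flat value is just the least-squares ratio of data to the rendered scene, accumulated over exposures. In the real project the scene parameters are unknown too, so one would presumably alternate between fitting the scene given the flat and the flat given the scene; names and shapes below are placeholders.

```python
import numpy as np

def estimate_flat(data, models):
    """Per-pixel flat-field by least squares, given predicted (pre-flat) scenes.

    data, models: arrays of shape (n_exposure, ny, nx); `models` holds each
    exposure's scene (point sources convolved with the pixel-convolved PSF)
    rendered onto the detector *before* multiplication by the flat.
    Minimizing sum_e (data_e - flat * model_e)^2 per pixel gives
    flat = sum(model * data) / sum(model ** 2)."""
    num = np.sum(models * data, axis=0)
    den = np.sum(models ** 2, axis=0)
    return np.where(den > 0, num / den, 1.0)  # pixels never illuminated stay at 1
```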
2015-06-11
priorities
In a low-research day I got some quality time in at the end with Foreman-Mackey, reviewing the open threads and unfinished papers. There are five high priority unfinished projects, all related to exoplanets (meaning: we aren't even counting the other stuff). Uh oh.
2015-06-10
intra-pixel issues for astrometry; fiber collisions
[No posts for a while; a short mental-health staycation.]
At group meeting, at Fadely's request, I summarized the #LGAstat meeting. The group was interested in the comparisons of data with simulations, but thought my ideas about re-doing LMC infall inference were dumb! I will keep refining that idea. One question: Do sub-pixel flat issues screw up astrometric measurements at HST resolution? The local-group proper motions are very small, and the instrument is not hugely oversampled, so intrapixel issues in the flat could be adding variance to the proper-motion estimator. Added variance generally leads to added kinetic energy, and that in turn leads to inferences that the satellite is at pericenter, which is what is being inferred!
Chang Hoon Hahn (NYU) told us about fiber collisions in SDSS-III and IV large-scale structure projects. This problem—that some close pairs of galaxies in the sample only obtain one redshift between them—sounds simple, but is far from it when two-point (or higher-order) statistics are of interest. He showed us the advances they have made by treating the missing redshift as being drawn from a fairly realistic pdf in redshift space. It looks promising, but there are still advances to be made. One interesting thing (that Phil Marshall should love) is that his current correction scheme is like a marginalization over the missing redshift, but marginalizing with a sampling approximation and using only a single sample as the integral approximation!
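In toy form, the correction is a Monte-Carlo marginalization over the missing redshift, and the scheme Hahn described corresponds to the n_samples=1 limit. The statistic and the line-of-sight pdf below are made-up placeholders, just to show the variance you give up with a single draw.

```python
import numpy as np

rng = np.random.default_rng(1)

def marginalize(f, sample_z, n_samples):
    """Monte-Carlo marginalization of a statistic f over a missing redshift,
    with sample_z() drawing from the line-of-sight pdf for the collided galaxy."""
    return np.mean([f(sample_z()) for _ in range(n_samples)])

# placeholder statistic and pdf: the missing z is drawn near its neighbor's z
z_neighbor = 0.55
f = lambda z: np.exp(-0.5 * ((z - 0.5) / 0.01) ** 2)   # stand-in pair statistic
sample_z = lambda: rng.normal(z_neighbor, 0.02)

print(marginalize(f, sample_z, 1))      # single-sample approximation (noisy)
print(marginalize(f, sample_z, 1000))   # converged Monte-Carlo estimate
```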
2015-06-04
#LGAstat, day 4
On the last day of #LGAstat, Ivezic (UW) spoke about LSST and his ideas about where we aren't yet ready. The project is funded to generate the data, but not to do all the science with the data, so a lot of the things we want from LSST we will have to do ourselves and figure out ourselves. The cool thing is that the project has a "Level 3" products plan that will make it possible for people outside the project team to contribute code and catalogs and measurements and outputs to the pipelines. Ivezic made an interesting suggestion: Even if you aren't doing insane Bayesian inferences, it is important to understand Bayesian reasoning (I really think he meant "probabilistic reasoning") in order to be clear about (and communicate clearly about) your data-analysis assumptions. I agree!
Hargis (Haverford) and Bechtol (UWM) and Sand (Texas Tech) talked about dwarf galaxy detections in various data sets and expected numbers and so on. Bechtol's methods are probabilistic mixture models and are performing incredibly well: The DES has already found many new ultra-faint dwarf companions to the Milky Way.
Wetzel (Caltech) and Tollerud (Yale) gave nice talks that continued themes from earlier in the week about comparing simulations to data: They are importance-sampling (with the likelihood function) prior samples (from n-body simulations) to understand the infall times of Milky-Way satellites. They are both data and theory starved (so I think their samplings aren't converged in any sense) but they get really informative posterior information about infall times. This method (importance-sampling of n-body simulation samples) is looking incredibly productive as a new technique. Props to Busha and Marshall and others who pioneered this!
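The mechanics are simple enough to sketch: weight each simulated analog by the likelihood of the observed summaries (distance, velocities, whatever is measured) and read off weighted posterior expectations; the effective sample size is exactly where the convergence worry shows up. All the arrays and numbers below are fake placeholders, not anyone's actual simulations or data.

```python
import numpy as np

def importance_weights(sim_summaries, obs, obs_err):
    """Importance weights for prior samples drawn from n-body simulations:
    a Gaussian likelihood of the observed summaries for each simulated analog."""
    chi2 = np.sum(((sim_summaries - obs) / obs_err) ** 2, axis=1)
    w = np.exp(-0.5 * chi2)
    return w / w.sum()

# placeholder "simulation": summaries (n_sim, n_obs) and an infall time per analog
rng = np.random.default_rng(3)
sim_summaries = rng.normal(size=(5000, 3))
sim_infall_times = rng.uniform(0.0, 10.0, 5000)     # Gyr

obs = np.array([0.3, -0.5, 1.0])
obs_err = np.array([0.2, 0.2, 0.3])

w = importance_weights(sim_summaries, obs, obs_err)
print("posterior mean infall time:", np.sum(w * sim_infall_times))
print("effective sample size:", 1.0 / np.sum(w ** 2))
```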
I had to miss a few talks because of work stuff, including an interesting contribution by Walker (CMU) that looked absolutely great (about probabilistic stellar parameter estimation). As usual, the above summary is very unfair and incomplete! Thanks to Loebman (Michigan) and Nidever (Michigan) for a great meeting.
2015-06-03
#LGAstat, day 3
Today was a great day at #LGAstat. Here is a personal, non-exhaustive set of highlights:
Roth (UCL) gave a great talk about understanding galaxy or galaxy-group properties as a function of cosmological initial conditions. She uses a framework in which she can adjust the initial-conditions draw (to, say, have a certain total density, or a certain ratio of density in one region relative to another, and so on) and see how the simulation results depend on the changes to the initial conditions. Her method permits her to move the ICs smoothly, but also to ensure that they remain quantitatively comprehensible as a draw from the true IC prior. She is using this framework to ask causal questions about outcomes for galaxies. The long-term goal is to understand the ICs from which the Local Group (and other groups) formed.
Besla (Arizona) and Olsen (NOAO) spoke about the LMC and SMC, with Besla concentrating on arguments about infall and orbit, and Olsen on arguments about stellar and gas dynamics and the Magellanic stream. Besla's talk was full of rich astrophysical (rather than strictly statistical) arguments, and Olsen brought an actual large data set, which was delivered to every participant by USB key!
Kravtsov (Chicago) gave a beautiful talk about the timing argument for dark matter, and its update via importance sampling of simulation-based prior samples of Local-Group analogs. The timing argument is incredibly powerful and surprisingly accurate, he finds. This, combined with the Besla arguments, suggested a whole host of new projects to be doing with simulations and data.
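For my own amusement, here is the textbook version of the timing-argument arithmetic (a radial Kepler orbit from zero separation at the Big Bang to the observed separation and approach velocity today), with round placeholder numbers for the MW-M31 pair; this is the classical calculation, not Kravtsov's importance-sampling update.

```python
import numpy as np
from scipy.optimize import brentq

# round placeholder numbers for the MW-M31 pair
r = 770.0                 # separation today [kpc]
v = -110.0 * 1.0227       # radial velocity [km/s -> kpc/Gyr], negative = approaching
t0 = 13.8                 # age of the Universe [Gyr]
G = 4.498e-6              # Newton's constant [kpc^3 / (Msun Gyr^2)]

# parametric radial orbit: r = a (1 - cos eta), t = sqrt(a^3 / GM) (eta - sin eta),
# v = sqrt(GM / a) sin eta / (1 - cos eta).  Eliminating a and M gives
# v t / r = sin eta (eta - sin eta) / (1 - cos eta)^2, which we solve for eta.
def f(eta):
    return np.sin(eta) * (eta - np.sin(eta)) / (1.0 - np.cos(eta)) ** 2 - v * t0 / r

eta = brentq(f, np.pi + 1e-6, 2.0 * np.pi - 1e-6)  # past apocenter, now infalling
a = r / (1.0 - np.cos(eta))
M = a ** 3 / G * ((eta - np.sin(eta)) / t0) ** 2
print(f"eta = {eta:.2f} rad, a = {a:.0f} kpc, M_total ~ {M:.1e} Msun")
# comes out at a few times 10^12 Msun for the pair
```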
There were so many great talks today, I can't even begin to mention them all, but I should shout out VanderPlas (UW), who told us not to be afraid of having more parameters than data, and also is finding RR Lyrae stars automatically with clever generalizations of the periodogram. Foreman-Mackey told the crowd not to do 1/Vmax weighting of incomplete data (and what they should do instead). Farr (Chicago) blew us away with a new, very simple ensemble sampler that could be merged beautifully with emcee. Martin (Strasbourg) took us to school with his mixture-modeling of the entire halo of M31, and Lang did more of the same for the PHAT project's (overwhelmingly large) astrometric solution. Martin made a nice point about the comparison of M31 with simulations: The data show a lot of substructure relative to realistic stellar-halo simulations for galaxies at similar mass. This led to some very useful discussion.
2015-06-02
#LGAstat, day 2
Today was day two of the ego-boosting Local-Group Astrostatistics meeting (#LGAstat). It was another great day of talks, some (personal, idiosyncratic) highlights of which follow:
Bird (Vanderbilt) convinced us (or nearly so) that there might be remnants of galaxy evolution encoded in the Milky Way disk (even despite radial mixing and so on). He pointed to evidence that disks form "upside down" (oh the horror of that): At high redshift, disk star formation is dynamically hotter than at low redshift. Relatedly, Nidever (Michigan) showed us a bimodality in disk-star metallicities somewhat unlike what Zolotov described for the halo yesterday. It appears everywhere, but with the two modes in different ratios as a function of location in the disk.
My colleague Anna Ho (MPIA) described The Cannon in detail, and impressed the audience. She is building the version of the code that is for public release and building into survey pipelines. She was asked some questions that get at two big issues with our approach: What do we do if the training set is not "like" the test set? and How do we know that when we are determining label X we aren't just learning a relationship between label X and other labels that are physically unrelated but covariant in the training set? On the former, we can't beat reality, but The Cannon does contain a realistic noise model, so we don't need the training and test sets to be the same in signal-to-noise. On the latter, all we can do is look for measures of statistical independence in our predictions. However, I have a deep-seated but unarticulated fear that we could get hosed by a training set with certain kinds of label degeneracies. More to think about there.
Bovy talked about the structure of the Milky Way in abundance slices, and also how to analyze incomplete catalogs. That is, he talked about the likelihood function and inference in the presence of bad incompleteness.
Schlafly (MPIA) and Green (CfA) showed us results from the three-dimensional Milky Way dust map they have built from the PanSTARRS data. Incredibly beautiful! The day ended with Milky Way bulge talks. Ness showed and interpreted Lang's beautiful WISE image of the X-shaped bulge, which has appeared (to date) only on Twitter (tm)! It appears that one side of the X is closer (and larger in angular extent) than the other, which is beautifully consistent with the expectations from that orbit structure.
2015-06-01
#LGAstat, day 1
Today was the first day of the Local Group Astrostatistics meeting at Ann Arbor, organized by Loebman (Michigan) and Nidever (Michigan). The meeting is remarkable for its great turnout, very young age profile, and design (it is both an interdisciplinary meeting and a set of pedagogical workshops). It is also a huge boost to my ego, since there are scheduled talks by four of my ex-students, and many of my close collaborators! And I had nothing to do with any of that; the organizers recused me from any decisions related to any of my people.
After my intro talk, the Stream Team members Price-Whelan, Bonaca, and Sanderson all spoke about projects my loyal reader knows well. Zolotov (yes, another CampHogg camper), Deason (UCSC), Slater (Michigan), Bell (Michigan), Gomez (MPIA), and Vivas (CTIO) all gave talks about galaxy halos, touching on issues of abundances, substructure, formation history, and comparison to theory. Zolotov predicts a chemical bimodality in the halo at high metallicities. This prediction is very general, but hasn't (I think) been seen observationally. We should find that!