2013-08-30
I had a long conversation this morning with Rix and Lisa Kaltenegger (MPIA) about how you would losslessly propagate observational noise in exoplanet observations into probabilistic judgements about habitability. We decided to take a principled (Bayesian, hierarchical) approach in setting up the problem and then to make approximations as necessary to make the calculation possible given current knowledge and technology. We worked through the problem in the form of a probabilistic graphical model, and then discussed how to translate that PGM into code. And then sample it.
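Purely to fix ideas (this is my toy, not anything we wrote down; all the numbers and the "habitability" criterion below are made up), here is the noise-propagation step in a few lines of numpy: noisy measurements of a planet's radius and insolation, Gaussian priors on the latent true values, and a probabilistic habitability judgement obtained by pushing posterior samples of the latents through a hard criterion.

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical measurements (units: Earth radii, Earth insolations)
r_obs, r_err = 1.1, 0.25
s_obs, s_err = 0.9, 0.30

# Gaussian priors on the true values (stand-ins for population knowledge)
r0, sr = 1.0, 1.0
s0, ss = 1.0, 1.0

def posterior_samples(obs, err, mu, sigma, n=100000):
    # With a Gaussian prior and Gaussian likelihood, the posterior for the
    # latent true value is Gaussian; draw samples from it directly.
    var = 1.0 / (1.0 / err**2 + 1.0 / sigma**2)
    mean = var * (obs / err**2 + mu / sigma**2)
    return rng.normal(mean, np.sqrt(var), size=n)

r = posterior_samples(r_obs, r_err, r0, sr)
s = posterior_samples(s_obs, s_err, s0, ss)

# toy habitability criterion (pure illustration, not a real one)
habitable = (r > 0.5) & (r < 1.6) & (s > 0.35) & (s < 1.1)
print("P(habitable | data) ~", habitable.mean())
```

In the real problem the latent variables, priors, and criterion would all be far richer (and hierarchical), but the output has the same form: a probability, not a yes-or-no.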
2013-08-29
reverberation mapping, modeling Kepler
In Galaxy Coffee this morning, William Bethune (Univ. J. Fourier) spoke about his project to look at reverberation mapping of a variable AGN at two-ish microns. He and his collaborators can do reverberation mapping on the dusty torus around the AGN, which was novel (to me). They can see temperature changes as well as brightness changes, which opens up new possibilities for physical modeling. The data are noisy and sparse, so the Bayesian methods of Brewer might make for better results.
In the afternoon, I worked on our white paper for the Kepler two-wheel call. All I have so far is an executive summary, no content! Unfortunately it is due in a few days.
2013-08-28
abundances (chemical and planetary)
At Milky Way group meeting I got all crazy about the fact that there are all these multi-element chemical abundance surveys starting, taking great spectra of giant stars, but there is no evidence (yet) that anyone can actually measure the detailed abundances in any giant stars, even given great data. I exhorted everyone to look at the APOGEE data, which are beautiful and plentiful and ought to be good enough to do this science. Any improvement in their ability to measure chemical abundances will be richly rewarded.
In the afternoon I spoke with Beth Biller (MPIA) and Ian Crossfield (MPIA) about possibly constraining the long-period planet distribution using single-transit objects in the Kepler data (that is, stellar light curves for which a single transit is observed and nothing else). This is an idea from the exoSAMSI meeting a few months ago. We decided that it would probably not be a good idea to do this with a single-transit catalog that doesn't also have a good estimate of completeness and purity and so on. That is, Biller and Crossfield were (rightly) suspicious when I said maybe we could just fit the completeness function hyper-parameters along with the population hyper-parameters! That said, they were both optimistic that this could work. I wrote to Meg Schwamb (Taipei) about her (possibly existing) single-transit catalog from PlanetHunters.
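To make that concrete (my notation, sketched after the fact, not something we wrote down together): if Gamma(P, R | theta) is the occurrence-rate density for the population and Q(P, R | phi) is the completeness of the single-transit search, the inhomogeneous-Poisson likelihood for a catalog of detections would be something like
\[
\ln p(\{P_k, R_k\} \mid \theta, \phi) = \sum_k \ln\left[ Q(P_k, R_k \mid \phi)\, \Gamma(P_k, R_k \mid \theta) \right] \;-\; \int Q(P, R \mid \phi)\, \Gamma(P, R \mid \theta)\, \mathrm{d}P\, \mathrm{d}R ,
\]
with priors on both theta and phi. The question is whether the data constrain phi at all without an external injection-and-recovery calibration; that is the part Biller and Crossfield were (rightly) suspicious of.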
2013-08-27
Kepler
Most of my research time today went into herding the cats involved in my plan to submit a white paper to the Kepler two-wheel call.
2013-08-26
imaging survey strategy
I spent some time discussing with Stephanie Wachter (MPIA) the Euclid calibration strategy and survey requirements. I think that Euclid (and all surveys) should do something like the "strategy C" in the paper on survey strategy by Holmes et al. Indeed, I think that strategy C would not only lead to far better calibration of Euclid than any of their current survey-strategy ideas; it would also make for more uniform solid-angle coverage, better time-domain coverage, and greater robustness to spacecraft faults or issues.
2013-08-25
off the grid, thinking
I spent the weekend completely off the grid. I didn't even bring my computer or any device. That was a good idea, it turns out, even for doing research. I got in some thinking (and writing) on various projects: I sharpened up my argument (partially helped by conversations with various MPIA people last week) that you never really want to compute the Bayes evidence (fully marginalized likelihood). If it is a close call between two models, it is very prior-dependent and isn't the right calculation anyway (where's the utility?); if it isn't a close call, then you don't need all that machinery.
I worked out a well-posed form for the question "What fraction of Sun-like stars have Earth-like planets on year-ish orbits?". As colloquially stated, that question is not well-posed, but there are various possible well-posed versions of it, and I think some of them might be answerable with extant Kepler data.
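One example of a well-posed version (the box edges here are arbitrary choices, just to illustrate the form):
\[
f \equiv \int_{R_1}^{R_2} \int_{P_1}^{P_2} \Gamma(P, R)\, \mathrm{d}\ln P\, \mathrm{d}\ln R ,
\]
where Gamma(P, R) is the expected number of planets per star per logarithmic interval in period P and radius R, computed for an explicitly defined set of "Sun-like" stars, and the limits bracket "year-ish" and "Earth-like". The answer depends on the box and on the stellar selection, which is exactly why the colloquial question is not well-posed.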
Along the same lines, I wrote up some kind of outline and division of responsibilities for our response to the Kepler call for white papers related to repurposing the spacecraft in the two-wheel era. I decided that our main point is about image modeling, even though we have many thoughts and many valuable things to say about target selection, field selection, cadence, and so on. When I get back to civilization I have to email everyone with marching orders to get this done.
Rix and I have a side project to find streams or kinematic substructures in Milky-Way stellar data of varying quality. It works by building a sampling of the possible integrals of motion for each star given the observations, as realistically as possible, and then finding consensus among different stars' samplings. I worked on scoping that project and adjusting its direction. I am hoping to be able to link up stars in big Halo-star surveys into substructures.
2013-08-22
cosmography, dust mapping, null data, discrete optimization
In a very full day, I learned about quasar-absorption-line-based mapping of the density field in large volumes of the Universe from K. G. Lee (MPIA), I discussed non-parametric methods for inferring the three-dimensional dust map in the Milky Way from individual-star measurements with Richard Hanson (MPIA), I was impressed by work by Beth Biller (MPIA) that constrains the exoplanet population by using the fact (datum?) that there are zero detections in a large direct-detection experiment, and I helped Beta Lusso (MPIA) get her discrete optimization working for maximum-likelihood quasar SED fitting. On the latter, we nailed it (Lusso will submit the paper tomorrow of course) but before nailing it we had to do a lot of work choosing the set of models (discrete points) over which fitting occurred. This reminds me of two of my soap-box issues: (a) Construction of a likelihood function is as assumption-laden as any part of model fitting, and (b) we should be deciding which models to include in problems like this using hierarchical methods, not by fitting, judging, and trimming by hand. But I must say that doing the latter does help one develop intuition about the problem! If nothing else, Lusso and I are left with a hell of a lot of intuition.
2013-08-21
the bulge
In a low-research day, Melissa Ness (MPIA) led a discussion in the Milky Way group meeting about the Milky Way Bulge. She showed that it very clearly has an x-shape (peanut-shape or boxy shape) and that the x-shape is more visible in the higher metallicity stars. She also showed evidence from simulations that the higher metallicity stars are more represented in the x-shape because of the phase-space distribution they had when they were excited into the x-shape orbits by the bar; that is, the metallicity distribution in the bulge is set by the excitation mechanism as much as the properties of the star formation. One thing that was interesting to me about this is that the bulge is x-shaped and the orbits are also x-shaped. That means that maybe we could just "read off" the orbits from the three-dimensional distribution of stars. Ish. That's not often true in dynamical systems. Ness's data come from a sparse sampling of the sky, but her results are about big, continuous structures. It would be great to get complete (meaning spatially contiguous) coverage (meaning spectral observations) of giant stars all over the bulge!
2013-08-20
Save Kepler Day, insane imaging precision
Today was Save Kepler Day at Camp Hogg. Through a remarkable set of fortunate events, I had Barclay (Ames), Fergus (NYU), Foreman-Mackey (NYU), Harmeling (MPI-IS), Hirsch (UCL, MPI-IS), Lang (CMU), Montet (Caltech), and Schölkopf (MPI-IS) all working on different questions related to how we might make Kepler more useful in two-wheel mode. We are working towards putting in a white paper to the two-wheel call. The MPI-IS crew got all excited about causal methods, including independent components analysis, autoregressive models, and data-driven discriminative models. By the end of the day, Foreman-Mackey had pretty good evidence that the simplest auto-regressive models are not a good idea. The California crew worked on target selection and repurpose questions. Fergus started to fire up some (gasp) Deep Learning. Lang is driving the Tractor, of course, to generate realistic fake data and ask whether what we said yesterday is right: The loss of pointing precision is a curse (because the system is more variable) but also a blessing (because we get more independent information for system inference).
One thing about which I have been wringing my hands for the last few weeks is the possibility that every pixel is different; not just in sensitivity (duh, that's the flat-field) but in shape or intra-pixel sensitivity map. That idea is scary, because it would mean that instead of having one number per pixel in the flat-field, we would have to have many numbers per pixel. One realization I had today is that there might be a multipole expansion available here: The lowest-order effects might appear as dipole and quadrupole terms; this expansion (if relevant) could make modeling much, much simpler.
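Roughly what I mean, as a sketch (the notation is made up on the spot): write the counts in pixel p as the scene integrated against that pixel's intra-pixel response, and expand the response in low-order moments:
\[
C_p = \int S_p(\Delta x, \Delta y)\, I(x_p + \Delta x,\, y_p + \Delta y)\, \mathrm{d}\Delta x\, \mathrm{d}\Delta y ,
\]
\[
S_p(\Delta x, \Delta y) \approx f_p \left[ 1 + d_{p,x}\, \Delta x + d_{p,y}\, \Delta y + q_{p,xx}\, \Delta x^2 + 2\, q_{p,xy}\, \Delta x\, \Delta y + q_{p,yy}\, \Delta y^2 \right] .
\]
That is, each pixel would get a flat-field amplitude f_p, a dipole vector, and a quadrupole tensor; if the higher-order terms turn out to be small, or shared across the detector, the parameter count stays manageable.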
The reason all this matters to Kepler is that—when you are working at insane levels of precision (measured in ppm)—these intra-pixel effects could be the difference between success and failure. Very late in the day I asked Foreman-Mackey to think about these things. Not sure he is willing!
2013-08-19
imaging models; save Kepler
I arrived at the MPI-IS in Tübingen to spend two days talking about image modeling with Schölkopf, Harmeling, and Kuhlmann. A lot of what we are talking about is the possibility of saving Kepler, where our big idea is that we can recover lost precision (from loss of pointing accuracy) by modeling the images, but we also talked about radio interferometry. On the Kepler front, we discussed the past Kepler data, the precision requirements, and the problems we will have in modeling the images. One serious problem for us is that because Kepler got its precision in part by always putting the stars in the exact same places in the CCD every exposure, we don't get the kind of data we want for self-calibration of the detector and the PSF. That's bad. Of course, the precision of the whole system was thereby made very good. In two-wheel mode (the future), the inevitably larger drift of the stars relative to the CCD pixels will be a curse (because the system won't be perfectly stable and stationary) but also a blessing (because we will get the independent information we need to infer the calibration quantities).
On the radio-interferometry front, we discussed priors for image modeling, and also the needs of any possible "customers" for a new radio-interferometry image-construction method. We decided that among the biggest needs are uncertainty propagation and quantification of significance. These needs would be met by propagating noise, either by sampling or by developing approximate covariance-matrix representations. In addition, we need to give investigators ways to explore the sensitivities of results to priors. We came up with some first steps for Kuhlmann.
In the afternoon, I spoke about data analysis in astrophysics to a group of high-school students interested in machine learning.
2013-08-16
streams, streams, and more streams
It was stream day today. We started with Bovy giving an impromptu lecture on a generative model for tidal streams based on actions and angles. It is almost identical to a model that I have been working on with / for Johnston and Price-Whelan, but angle-space makes it possible for Bovy to do one marginalization integral analytically that I have to do numerically. That huge benefit comes at a cost of course; the analytic marginalization requires a particular prior on stream disruption rate, and the action–angle formalism requires integrable potentials. All that said, the marginalization might be valuable computationally. During Bovy's disquisition, Robyn Sanderson pointed out that some of the ideas he was presenting might be in the Helmi & White paper from back in the day.
After this, Sanderson and I worked on our action-space clustering note. Sanderson debugged the use of KL-Divergence as a method to put uncertainties on parameter estimates; I worked on abstract and introduction text. One question I am interested in (and where Sanderson and I disagree) is whether what we are doing will work also for kinematically hot structures (thick disk, very old streams) or only kinematically cold ones. Since the method is based on information theory (or predictive value) I have an intuition that it will work for just about any situation (though obviously it will get less constraining as the structures get hotter).
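For concreteness, here is a generic binned estimate of the KL divergence between two one-dimensional sample sets in numpy; Sanderson's actual estimator, and the way the curvature of the KLD near its maximum gets turned into parameter uncertainties, may differ in detail.

```python
import numpy as np

def kl_divergence(p_samples, q_samples, bins=50, eps=1e-12):
    """Crude binned estimate of D_KL(P || Q) from two 1-D sample sets."""
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    p, edges = np.histogram(p_samples, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(q_samples, bins=bins, range=(lo, hi), density=True)
    dx = np.diff(edges)
    p = p + eps  # avoid log(0) in empty bins
    q = q + eps
    return np.sum(dx * p * np.log(p / q))

# toy usage: a clustered (cold) distribution carries more information
# relative to a broad baseline than a smooth (hot) one does
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 3.0, 100000)
clustered = rng.normal(0.0, 0.3, 100000)
print(kl_divergence(clustered, baseline))
```

The intuition in the note is just this: as structures get hotter the clustered distribution approaches the baseline, the KLD shrinks, and the constraint weakens, but it does not vanish.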
2013-08-15
baryons and dark matter
At MPIA Galaxy Coffee, Bovy talked about his work on pre-reionization cosmology: He has worked out the effect that velocity differences between baryons and dark matter (just after recombination) have on structure formation: On large scales, there are velocity offsets of the order of tens of km/s at z=1000. The offsets are spatially coherent over large scales but they affect most strongly the smallest dark-matter concentrations. Right now this work doesn't have a huge impact on the "substructure problem" but it might as we go to larger samples of even fainter satellite galaxies at larger Galactocentric distances. In question period there was interest in the possible impact on the Lyman-alpha forest. In the rest of the day, Sanderson (Groningen) and I kept working on action space, and Lusso (MPIA) and I continued working on fitting quasar SEDs.
2013-08-14
phase-space structure
Today Robyn Sanderson (Groningen) arrived at MPIA for three days of sprint on a paper about inferring the Milky Way Halo potential by optimizing the information (negative entropy) in the phase-space distribution function. This project involves the horror of transforming the observables to the actions (something I railed against a few days ago), but the method is promising: It permits identification of potentials given phase-space information without requiring that we identify related stars or specific structures at all. The method only requires that there be structure. And of course we are looking at how it degrades as the observational errors grow.
In addition to working on this, we attended the Rix-group Milky-Way group meeting, where (among other things), Wyn Evans (Cambridge) told us about using the apocenter information we have about the Sagittarius tidal stream to infer Halo potential parameters. He gets a low mass for the Milky Way (in accord with other stellar kinematic methods, but in conflict with Leo I's high velocity; is it bound?). I had a bit of a problem with how Evans and his collaborators join up the stream to the progenitor, but that may just be a detail. Hoping to learn more later this week.
2013-08-13
probabilistic consensus
If you have a population of objects, each of which has a true property x which is observed noisily, how do you infer the distribution of true x values? The answer is to go hierarchical, as we do in this paper. But it surprises some of my friends when I tell them that if you aren't going to go hierarchical, it is better to histogram the maximum-likelihood values than it is to (absolute abomination) add up the likelihood functions. Why? Each maximum-likelihood value is noisy, so the histogram gives you a noise-convolved distribution; but each likelihood function has really broad support on top of that, so adding them up gives you a doubly-convolved distribution! Which is all to say: Don't ever add up your likelihood functions!
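The argument in one line of (my) math: suppose the true values are drawn from a distribution f(x) and each datum is the true value plus Gaussian noise of variance sigma^2. The maximum-likelihood estimate is just the datum itself, so the histogram of maximum-likelihood values estimates
\[
f \ast N(0, \sigma^2) ,
\]
one convolution with the noise. The sum of the likelihood functions is \(\sum_n N(D_n; x, \sigma^2)\), and since the data D_n are themselves drawn from \(f \ast N(0, \sigma^2)\), its expectation is proportional to
\[
\left[ f \ast N(0, \sigma^2) \right] \ast N(0, \sigma^2) = f \ast N(0, 2\sigma^2) ,
\]
that is, convolved twice.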
When you look under the hood, what hierarchical inference is doing is looking for consensus among the likelihood functions; places where there is lots of consensus are places where the true distribution is likely to be large in amplitude. Rix had a very nice idea this weekend about finding consensus among likelihood functions without firing up the full hierarchical equipment. The use case is stream-finding in noisy Halo-star data sets. I wrote text in Rix's document on the subject this morning.
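Schematically, the hierarchical calculation multiplies rather than adds:
\[
p(\{D_n\} \mid \alpha) = \prod_n \int \mathcal{L}_n(x)\, p(x \mid \alpha)\, \mathrm{d}x ,
\]
where alpha parameterizes the distribution of true values and L_n is the n-th object's likelihood function. The product is large only for values of alpha that put probability where many likelihood functions simultaneously have support; that is the consensus I mean, and it is what any cheaper stream-finding trick would have to approximate.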
2013-08-12
type-II quasars
I have previously mentioned Lusso (MPIA) and Hennawi's argument that there ought to be lots of obscured quasars: The unobscured quasars show hot dust emission, indicating that there are lines of sight that pretty-much must be blocked. Today, Lusso and I tried to find some of these by messing with WISE data on BOSS LRGs. That is, we looked for galaxies with such prodigious mid-infrared output that there must be some other source (think: obscured AGN) powering it. We found a couple of probable type-II (obscured) quasars and a lot of other random stuff, including wrong SDSS-III redshifts, blazars, and even a star in Orion (yes, mis-classified as a galaxy by the pipeline). Needles in haystacks: Still hard to find! That is, these obscured quasars should be plentiful, but hard to find in any simple data set.
2013-08-09
interferometry, meet probability
Great morning chats with Jahnke (MPIA) about Euclid survey and calibration strategy and Kapala (MPIA) about comparing H-alpha emission to C+ emission in M31. After that, Malte Kuhlmann (MPI-IS) arrived to discuss generative probabilistic approaches to radio interferometry (think: replacing CLEAN). We discussed his compressed-sensing-inspired methods (which produce nicely regularized point estimates) and confronted him with some highly astronomical considerations: How do we create something that is not just righteous but also gets rightly adopted by a significant user base? And: How do we make it so that we can use the output of our code to more responsibly evaluate significance and more responsibly propagate uncertainty than what is currently industry-standard in radio astronomy? On the former, a key idea is that whatever we do has to fit into the standard radio-imaging workflow. On the latter, the key idea is that we need to exercise a justified likelihood function. I have a very good feeling about this project. Kuhlmann's background is in math and stats, which is a good background for bringing important new ideas into astrophysics. The day ended with a great talk by Watkins (MPIA) about omega Cen, the content of which also inspired a spirited discussion of globular clusters on the 17:48 bus.
2013-08-08
sampling in catalog space
In a low-research day, I gave an MPIA Galaxy Coffee talk on the sampling in catalog space paper with Brewer and Foreman-Mackey. I emphasized the big-picture issues of having probabilistic deblending results, understanding source populations below the plate limit, and treating catalogs like any other kind of inferential measurement. It is a great project with good results, but it was also computationally expensive, both in the sense of compute cycles, and in the sense of us (or really Brewer) having to know a large amount about high-end sampling methods (reversible jump meets nested sampling) and how to write efficient code. For the future, scaling is an interesting thing to think about: Could we scale up (possibly with a Gibbs or online approach) to a large data set?
2013-08-07
NRFG, day 3
Today was open-discussion day at Not Ready for Gaia. We put subjects on the board and then voted. To my great pleasure, the quantitative, probabilistic fitting of cold streams in the Milky Way made the cut. The most impressive thing that emerged (for me) when we discussed the relationship of what we should do according to the work of Sanders (Oxford) to what we did do in this paper is that we can describe both methods with a single, simple generative-model framework, with the Sanders approximation far better than ours. As I perhaps mentioned in a previous post, one interesting thing (to me) about the group of people in the room is that there was essentially no disagreement about how inference ought to be done. Probabilistic modeling has won. Generating the measurements has won. Perhaps not surprising, since it all more-or-less originates with Laplace working on the Solar System.
In a brief diatribe about computational complexity, I argued that the computational cost of doing dynamical modeling of the Milky Way will probably scale as the cube (or worse) of the number of stars: There is always computation that needs to be done for each star, the number of orbits or tori that need to be computed probably scales roughly as the number of data points, and our ambitions about the complexity (freedom) of the models we are fitting will also rise with the data size. This is just made up, but I doubt it is very far wrong.
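The hand-waving in symbols:
\[
\mathrm{cost} \propto N_\star \times N_\mathrm{orbit} \times N_\mathrm{param}, \qquad N_\mathrm{orbit} \propto N_\star,\; N_\mathrm{param} \propto N_\star \;\Rightarrow\; \mathrm{cost} \propto N_\star^{3} ,
\]
with all three proportionalities being the made-up part.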
2013-08-06
NRFG, day 2
It turns out we are still Not Ready for Gaia. Binney (Oxford) opened the day with a strong argument that we should be thinking in terms of action–angle variables, and also a spirited explanation of his torus methods that do their best to approximate any potential with an explicitly integrable potential built from a quasi-optimal foliation of tori. His argument was good. That said, I am very against doing science by transforming the data to action–angle variables and then asking questions there. The transformation (often called sloppily a "projection") is highly non-linear and wrecks intuition about uncertainty. Besides, it depends on having a good potential model, which (I thought) was the whole point! Now Binney is not making this mistake: He wants us to do inference in the space of the observations, but there are certainly projects that transform to action–angle variables and say "look at the structure!" but don't note that the structure might be totally gone if the errors are non-negligible or the potential model is wrong (and both problems are unavoidable, always).
After Binney, Bovy spoke about his work with Rix modeling mono-abundance populations and getting the mass density in the Milky Way disk as a function of radius (near the Solar Circle). It is a monster project, with beautiful results. It brought to fruition quite a few things, some of which I was involved in, years past.
2013-08-05
NRFG, day 1
Binney (Oxford) and Rix got their larger diasporae together in Heidelberg starting today to discuss all things Milky-Way-dynamical. A lot of the conversation is about inference with real data (which I love). The ultimate theme is that we don't know what to do with the Gaia data (or any other data, really), so we dubbed the meeting Not Ready For Gaia (hashtag #NRFG).
There were many more things that happened than can be comfortably typed in a blog post, but Schlafly (MPIA) gave a great talk on the three-d dust map he has been making with Green (CfA) and Finkbeiner (CfA), including absolutely dramatic comparisons with the (now old) SFD map made from IRAS data. Sale (Oxford) showed how informative spatial priors can be used to regularize or improve (in principle) three-d dust maps. Sanders (Oxford) showed that cold tidal streams do not align with orbits (a subject discussed frequently in this forum in the past). He did a great job (in part bashing Rix's and my work with Koposov for its naïvete, or, as he so politely put it, "approximations"), despite the fact that Rix and I wouldn't shut up with comments and questions during his talk. Absolutely excellent stuff. I expected to spend time telling people to be more careful with the data, but both groups are super-probabilistic in their reasoning and trying to be principled with the data.
Late in the day I gave a Königstuhl Colloquium on the blackboard about MCMC. In principle it is going to be posted in video form on the web sometime soon.
2013-08-02
Gaussian Processes, disks
Mykytyn and Patel got a full Gaussian Process code for fitting quasar lightcurves together and working today. Not bad for a day's work. And (thanks to a genius insight from Foreman-Mackey) we have a model for multiple-band quasar data, so that we can model any bands or any combination of multiple bands, whether the different bands are taken simultaneously (as with SDSS) or with large time lags between them (as with PanSTARRS). We will implement the multi-bandiness next week, but I am still pretty damned impressed with the full construction of a GP system from scratch and working on real data in one day.
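Not their code, but a minimal sketch of the core computation any such system needs: the Gaussian-Process marginalized likelihood of a single-band light curve under a damped-random-walk-like (exponential) kernel, evaluated in plain numpy; the kernel choice and parameter values here are placeholders.

```python
import numpy as np

def gp_log_likelihood(t, y, yerr, amp, tau, mean=0.0):
    """GP marginal log-likelihood of a light curve y(t) under an
    exponential (damped-random-walk-like) kernel with amplitude amp
    and timescale tau, plus per-point Gaussian noise yerr."""
    r = y - mean
    dt = np.abs(t[:, None] - t[None, :])
    K = amp**2 * np.exp(-dt / tau) + np.diag(yerr**2)
    L = np.linalg.cholesky(K)                      # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, r))  # K^{-1} r
    log_det = 2.0 * np.sum(np.log(np.diag(L)))
    return -0.5 * (r @ alpha + log_det + len(t) * np.log(2.0 * np.pi))

# toy usage with fake data
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 100, 60))
y = rng.normal(0, 0.1, t.size)     # stand-in light curve (mag offsets)
yerr = np.full(t.size, 0.05)
print(gp_log_likelihood(t, y, yerr, amp=0.2, tau=20.0))
```

One natural multi-band extension (not necessarily the one Foreman-Mackey has in mind) would put the kernel on (time, band) pairs rather than on times alone, so simultaneous and non-simultaneous bands are handled identically.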
In the afternoon, Arjen van der Wel (MPIA) gave a very nice talk about star-forming galaxies over cosmic time. He can show, explicitly in the data by fitting the distribution of galaxy shapes, that most of the stars in the Universe do indeed form in disks. That's a fundamental result. He also has interesting results on the mean properties of disk growth by star formation.
2013-08-01
other people's problems
I spent four hours today talking with Gregory Green (CfA), Finkbeiner (CfA), and Schlafly (MPIA) about their Bayesian formalism for inferring a three-dimensional map of the dust in the Milky Way. I opened strong, saying what they were doing couldn't be right. By the end of the four hours, they pretty-much convinced me that what they are doing is right. They do some tricky stuff with what I learned at ExoSAMSI are "interim priors", and I am still working through the last bit of math. I also encouraged them to think about going to much smaller pixels (much higher angular and radial resolution); if they are all Bayesian, they shouldn't mind the spatial priors that doing so will require. All that said, their results are awesome.
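For my own notes, the interim-prior trick as I understand it (this is the general recipe; Green et al. may implement it differently): if each star's distance–extinction posterior is represented by K samples x_{nk} drawn under some interim prior p_0(x), then the marginalized likelihood for the hyper-parameters alpha of the dust map is approximately
\[
p(\{D_n\} \mid \alpha) \propto \prod_n \frac{1}{K} \sum_{k=1}^{K} \frac{p(x_{nk} \mid \alpha)}{p_0(x_{nk})} ,
\]
an importance re-weighting of the per-star samplings, so the per-star fits never have to be redone as the map parameters are updated.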
On that note, Nina Hernitschek (MPIA) spoke at Galaxy Coffee about Gaussian Process models for quasar light curves and the possibility of using them for reverberation mapping in the photometry alone. The model works not just at fine time resolution but at literally infinite resolution in the time domain; it is non-parametric in that sense. In principle, the dust map could be also, although I admit that it would be a non-trivial project. In related news, Patel and Mykytyn spent the day working on baby steps towards building Gaussian Process models of multi-band quasar light curves.
Late in the day I continued yesterday's discussions of quasar SED fitting with Lusso (MPIA) and Hennawi. I was filled with ideas, but we more-or-less decided that Lusso's brute-force methods are fine, given the current sample sizes and scientific goals. Brute force grid-search optimization has the great advantage over all other optimization strategies that it is guaranteed to find your best point (at least on your grid). That's useful!
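The kind of thing I mean, as a trivial sketch with made-up templates and data: evaluate chi-squared for every (template, normalization) pair on a grid and keep the best.

```python
import numpy as np

def best_grid_fit(flux, flux_err, templates, norms):
    """Brute-force grid search: for every (template, normalization) pair,
    compute chi^2 against the data and return the best pair."""
    best = (np.inf, None, None)
    for i, model in enumerate(templates):
        for a in norms:
            chi2 = np.sum(((flux - a * model) / flux_err) ** 2)
            if chi2 < best[0]:
                best = (chi2, i, a)
    return best  # (chi2_min, template index, normalization)

# toy usage: three fake "SED templates" over five bands
rng = np.random.default_rng(3)
templates = rng.uniform(0.5, 2.0, size=(3, 5))
flux_err = np.full(5, 0.05)
flux = 1.7 * templates[1] + rng.normal(0, flux_err)
print(best_grid_fit(flux, flux_err, templates, norms=np.linspace(0.5, 3.0, 251)))
```

The cost scales as the product of the grid sizes, which is exactly why it only stays sensible at current sample sizes; but within the grid it cannot miss.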