Today was the usual research-packed day at Flatiron. In the stars group meeting, Megan Bedell (UChicago) told us about her multi-epoch survey of Solar twins. Because they are twins, they have very similar log g and Teff values, so she can get very precise differential abundances. Her goal is to understand the relationships between abundances and planets; she gave us mechanisms by which the stellar abundances could affect planet formation, mechanisms by which planet formation could affect stellar surface abundances, and common causes that could affect both. She has measured 20-ish detailed abundances at high precision in 88 stars, at combined (because: multi-epoch) SNR of 2000-ish!
Doug Finkbeiner (Harvard) and Stephen Portillo (Harvard) told us about probabilistic catalogs, a project they are doing that builds on work Brewer, Foreman-Mackey, and I did a few years ago. They find (as we did) that a probabilistic catalog (a sampling of the posterior pdf in catalog space) can reliably find fainter sources than any standard point-estimate catalog, even one built using crowded-field software. They use HST imaging to deliver ground truth. They aren't going fully hierarchical; we discussed that in the meeting, along with the relative merits of probabilistic catalogs and of delivering an API to the likelihood function (my new baby).
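Since I keep mentioning the likelihood-API idea: the core of it is just a function that scores any proposed catalog against the pixels, which a survey could expose instead of (or alongside) a point-estimate catalog. Here is a minimal sketch under assumptions of my own (Gaussian pixel noise, a known pixel-convolved PSF); none of these names come from Portillo's actual code.

```python
import numpy as np

def ln_likelihood(image, ivar, catalog, psf):
    """Gaussian ln-likelihood of a proposed catalog given imaging.

    image, ivar : 2-d arrays of pixel values and inverse variances
    catalog     : iterable of (x, y, flux) source parameters
    psf         : callable psf(dx, dy) giving the pixel-convolved PSF
    """
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    model = np.zeros_like(image, dtype=float)
    for x, y, flux in catalog:
        model += flux * psf(xx - x, yy - y)
    return -0.5 * np.sum(ivar * (image - model) ** 2)
```

A probabilistic catalog is then (conceptually) a set of samples from the posterior this implies, with the number of sources itself a parameter; that trans-dimensional sampling is the hard part.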
Neven Caplar (ETHZ) went off-topic in the meeting to describe some results on the time variability of AGN. Sensibly, he wants to use time-domain data to test accretion-disk models. He is working with PTF data, which he had to recalibrate with a self-calibration (he even shouted out our uber-calibration of the SDSS imaging). He is computing structure functions (which look random-walk-like) and also doing inference in the context of CARMA models. He pointed out that there must be a long-term damping term in the covariance kernel, but no one can see it, even with years of data. That's interesting; AGN really are like random walkers out to very long timescales.
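For concreteness, the damping in question is easiest to see in the simplest CARMA model, the damped random walk (an Ornstein-Uhlenbeck process), where both the covariance kernel and the structure function are analytic. A toy sketch, with made-up parameter values:

```python
import numpy as np

def drw_kernel(dt, sigma, tau):
    # damped-random-walk (Ornstein-Uhlenbeck) covariance kernel:
    # random-walk-like at lags << tau, damped (decorrelated) at lags >> tau
    return sigma ** 2 * np.exp(-np.abs(dt) / tau)

def drw_structure_function(dt, sigma, tau):
    # SF(dt)^2 = 2 * (k(0) - k(dt)); grows like sqrt(dt) at short lags and
    # saturates at sqrt(2) * sigma at long lags; that saturation is the
    # damping that no one can see in the data
    return np.sqrt(2.0 * (drw_kernel(0.0, sigma, tau) - drw_kernel(dt, sigma, tau)))

# illustrative numbers only: 0.2-mag variability, 3000-day damping time
lags = np.logspace(0.0, 4.0, 64)   # lags in days
sf = drw_structure_function(lags, sigma=0.2, tau=3000.0)
```

If the damping time is longer than the survey baseline, the data only ever sample the rising, random-walk part of this curve, which is exactly what Caplar describes.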
In the cosmology group meeting, Phil Bull (JPL) walked us through a probabilistic graphical model that replaces simple halo-occupation models with something a bit more connected to what we think is going on in galaxy evolution. Importantly, it permits him to do large-scale-structure experiments with multiple overlapping tracers from different surveys. Much of the discussion was about whether it is better to have a more sophisticated model that is more realistic, or a simpler model that is more tractable. That is an important question in every data analysis, and my answer is very different in different contexts.
Between these two meetings, Bedell and I worked out the simplest representation for our Avast model of stellar spectra, and Bedell went off to implement it. She crushed it! She has code that can optimize a smooth spectral model given a set of noisily measured spectra taken at different epochs, accounting for epoch-to-epoch differences in throughput and radial velocity. Not everything is working; we need to diagnose the optimizer we are using (yes, optimization is always the hardest part of any of my projects). But Bedell did more in one afternoon than I have gotten done in the last three months! Now we are in a position to make bound-saturating radial-velocity measurements and to look for covariant spectral variations in an agnostic way. I couldn't have been more excited at the end of the day.
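To be concrete about the kind of objective involved (with the caveat that every name and modeling choice below is my illustrative guess, not Bedell's actual implementation): it is a chi-squared over all epochs, with one smooth template shared by every epoch, plus a throughput scale and a radial velocity per epoch. On a ln-wavelength grid, a radial velocity is just a translation. Something like:

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import minimize

C = 299792.458  # speed of light (km / s)

def chi2(params, xs, ys, ivars, x_grid):
    """Multi-epoch objective: one shared template, per-epoch throughput and RV.

    params : template fluxes on x_grid, then N ln-throughputs, then N RVs (km/s)
    xs, ys, ivars : per-epoch ln-wavelength grids, fluxes, inverse variances
    """
    n_grid, n_epoch = len(x_grid), len(xs)
    template = params[:n_grid]
    lnas = params[n_grid:n_grid + n_epoch]
    rvs = params[n_grid + n_epoch:]
    f = interp1d(x_grid, template, bounds_error=False, fill_value=1.0)
    total = 0.0
    for n in range(n_epoch):
        # a Doppler shift is a translation in ln-wavelength: dx = ln(1 + v / c)
        model = np.exp(lnas[n]) * f(xs[n] - np.log1p(rvs[n] / C))
        total += np.sum(ivars[n] * (ys[n] - model) ** 2)
    return total

# the hard part, as always:
# result = minimize(chi2, p0, args=(xs, ys, ivars, x_grid))
```

The near-degeneracies between the shared template and the per-epoch parameters are presumably where our optimizer is getting stuck.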