2014-07-17

coffee, objectives in calibration

At MPIA Galaxy Coffee, Schmidt (UCSB) showed resolved spectroscopy of some highly magnified high-redshift galaxies to look for spatial variations of metallicity. Beautiful data! He also talked about the photon density at high redshift and the implications for reionization. McConnell (Hawaii) spoke about the need for more good black-hole mass determinations. He (inadvertently perhaps) showed that the BH-sigma (velocity dispersion) relation could easily have zero scatter, when you consider the uncertainties in both directions and the possibility that the sigma uncertainties are under-estimated. In general it is very hard to get a sigma on a sigma! Filed away for future thinking.
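The point can be made with a toy likelihood (my formulation, filed away here; not McConnell's analysis): fit a linear relation with an intrinsic-variance parameter V, propagating the sigma uncertainties through the slope. If the quoted sigma uncertainties are under-estimates, then scaling sx up toward its true value absorbs the apparent scatter, and the best-fit V can go all the way to zero.

    import numpy as np

    def ln_like(params, x, y, sx, sy):
        # x = log sigma, y = log M_BH, with per-point uncertainties sx, sy
        m, b, V = params
        resid = y - (m * x + b)
        var = sy**2 + (m * sx)**2 + V  # both-direction errors plus intrinsic scatter
        return -0.5 * np.sum(resid**2 / var + np.log(2.0 * np.pi * var))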

In the afternoon, I spoke with Wang, Schölkopf, Foreman-Mackey about Kepler calibration. We tentatively decided to make exoplanet discovery our primary objective. An objective is required, because we can set our "hyper-parameters" (our choices about model and model complexity) to optimize anything from exoplanet discovery to exoplanet characterization to stellar rotation determination. I am still worried about the flexibility of our models.

2014-07-16

spiral structure in the Milky Way, K2

In Milky Way group meeting, Bovy showed results from a red-clump-star catalog derived from SDSS-III APOGEE and Schlafly showed extinction-corrected stellar maps sliced by distance from PanSTARRS. Both of these data sets plausibly contain good evidence of the spiral arms in stellar density, but neither of the first authors was willing to show me that evidence! I couldn't convince Bovy to help me out, but I think Schlafly might make the plot I want, which is the density as a function of Galactic X and Y for stars at low Z, divided by a smooth Galaxy model.
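For concreteness, here is roughly the plot I want, as a sketch (the exponential disk and all parameter choices are placeholders of mine, not Schlafly's model):

    import numpy as np

    def overdensity_map(X, Y, Z, R_d=3.0, nbins=64, zmax=0.3):
        # Select stars near the plane and histogram them in Galactic X, Y (kpc).
        m = np.abs(Z) < zmax
        H, xe, ye = np.histogram2d(X[m], Y[m], bins=nbins)
        xc = 0.5 * (xe[:-1] + xe[1:])
        yc = 0.5 * (ye[:-1] + ye[1:])
        XX, YY = np.meshgrid(xc, yc, indexing="ij")
        R = np.hypot(XX, YY)
        model = np.exp(-R / R_d)        # smooth axisymmetric exponential disk
        model *= H.sum() / model.sum()  # normalize to the observed counts
        return H / model                # spiral arms should show up as ridges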

In a phone call late in the day, Ian Crossfield (Arizona) and Erik Petigura (Berkeley) and I discussed K2 data. I (perhaps unwisely) agreed to see if I can photometer the data, given pixel files.

2014-07-15

exoplanet photometry, crossing the streams

Nick Cowan (Northwestern, Amherst) was in town today to give a seminar about exoplanet thermodynamics and climate. He showed nice results inferring the temperature distribution on the surfaces of hot Jupiters and the same for degraded data on the Earth. He spent a lot of time talking about thermostats and carbon and water cycles on the Earth and Earth-like planets. In the morning I discussed my OWL photometry with him, and we proposed to try it on his massive amounts of Spitzer IRAC data on transiting exoplanets.

After lunch, a group of the willing (Rix, Bovy, Sesar, Price-Whelan, myself) discussed stream fitting in the Milky Way. We decided to fit multiple streams simultaneously, starting with Orphan, GD-1, and Palomar 5. The first step is to gather the data. Price-Whelan and I also have to modify our method so that it can take the heterogeneous kinds of data in the GD-1 data set.

2014-07-14

centroiding and searching

I spoke with Vakili about centroiding stars. We are trying to finally complete a project started by Bovy ages ago to compare the best-possible centroiding of stars with a three-by-three pixel hack related to what is done in the SDSS pipelines. Vakili hit this issue because if you don't have good centroids, you can't get a good point-spread function model. Well, actually I shouldn't say "can't", because you can but then you need to make the centroids a part of the model that you learn along with the point-spread function. That may still happen, but along the way we are going to write up an analysis of the hack and also the Right Thing To Do.
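For concreteness, one common flavor of three-by-three centroiding (a sketch; I am not claiming this is exactly the SDSS implementation) fits a parabola through the peak pixel and its neighbors along each axis:

    import numpy as np

    def centroid_3x3(patch):
        # patch: 3x3 pixel stamp with the brightest pixel at the center.
        dx = 0.5 * (patch[1, 0] - patch[1, 2]) / (
            patch[1, 0] - 2.0 * patch[1, 1] + patch[1, 2])
        dy = 0.5 * (patch[0, 1] - patch[2, 1]) / (
            patch[0, 1] - 2.0 * patch[1, 1] + patch[2, 1])
        return dx, dy  # sub-pixel offsets relative to the central pixel

The Right Thing To Do, of course, is to fit the pixel-convolved point-spread function itself; the write-up is about how much the hack costs relative to that.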

Foreman-Mackey, Fadely, Hattori, and I discussed Hattori's search for exoplanets in the Kepler data. The idea is to build up a simple system based on simple components and then swap in more sophisticated components as we need them. We discussed a bit the question of "search scalar"—that is, what we compute as our objective function in the search. There is a likelihood function involved, but, as we often say in #CampHogg: Probabilistic inference is good at giving you probabilities; it doesn't tell you what to do. Search is a decision problem.
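One candidate scalar, as a sketch (one option among many, not a decision): the delta-chi-squared from a linear depth fit of a fixed transit template at each trial time, which is monotonic in the likelihood ratio under Gaussian noise.

    import numpy as np

    def search_scalar(flux, ivar, template):
        # flux: mean-subtracted lightcurve; ivar: inverse variances;
        # template: fixed transit shape on the same time grid.
        sTf = np.dot(template * ivar, flux)
        sTs = np.dot(template * ivar, template)
        depth = sTf / sTs      # linear least-squares transit depth
        return depth**2 * sTs  # delta-chi-squared; bigger is more transit-like

Even with a scalar like this in hand, what threshold to act on remains the decision problem.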

2014-07-12

Gaia RVS spectra

As I mentioned a few days ago, Mark Cropper (UCL) brought up the question of how to infer clean, high-resolution spectra from the Gaia RVS data, which are afflicted by distortions caused by charge-transfer inefficiency. I spent my small amount of work time today drafting a document with my position, which is that if you can forward-model the CTE (which they can) then spectral extraction is not hard.
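The heart of the position, in sketch form (notation mine; real CTE is flux-dependent, so the operator would have to be linearized or iterated): if the damaged data d are a known linear distortion A of the true spectrum s plus noise, d = A s + noise, then extraction is just weighted least squares.

    import numpy as np

    def extract_spectrum(d, A, ivar):
        # d: CTE-damaged pixel values; A: forward model mapping the true
        # spectrum onto damaged pixels; ivar: inverse pixel variances.
        # Solves s_hat = (A^T C^-1 A)^-1 A^T C^-1 d.
        ATA = A.T @ (ivar[:, None] * A)
        ATd = A.T @ (ivar * d)
        return np.linalg.solve(ATA, ATd)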

2014-07-10

GaiaCal, day four

The theme of this short meeting has been that things we thought were real astrophysical parameters just might not be. Soderblom continued along this path when he showed that it is impossible to measure effective temperature to better than something on the order of 100 K. That limits the precision of model comparisons, even in the absence of modeling uncertainties (which are currently large). I argued that perhaps we should just give up on these ersatz "physical" parameters and go to predictions of observables. I was shouted down.

Finkbeiner showed unreal comparisons between SDSS and PanSTARRS photometry, with each survey synthesizing the other. He can show with enormous confidence which survey is responsible for which photometric residuals, just by projecting the residuals onto the sky or into the focal planes of the two instruments. This is an interesting kind of "causal inference". It is incredibly convincing; I must discuss it with Schölkopf next month.
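The trick, as I understand it, goes something like this sketch (a toy version of mine, not Finkbeiner's code): bin the photometric residuals by sky position and, separately, by focal-plane position in each camera; coherent structure in one focal plane fingers that survey as the culprit.

    import numpy as np

    def binned_mean_residual(resid, u, v, nbins=32):
        # resid: per-star photometric residuals; (u, v): sky coordinates, or
        # focal-plane coordinates in one of the two instruments.
        num, ue, ve = np.histogram2d(u, v, bins=nbins, weights=resid)
        den, _, _ = np.histogram2d(u, v, bins=[ue, ve])
        return num / np.maximum(den, 1)  # mean residual per spatial bin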

2014-07-09

GaiaCal, day three

It was a half-day at the meeting today. Highlights included a talk by Bergemann about stellar models, which (perhaps unwittingly) demonstrated that the effective temperature, spectrum, and color of a star are not unique predictions of a model. That is, all of them are time-dependent; we really need to predict distributions, not values. In the questioning, Davies pointed out that the variability could be used as a predictor of the depth of the photospheric convection zone or turnover scale. We discussed that later and resolved to think more about it.

Creevey convinced me that asteroseismology might be the One True Path to reliable calibration of astrophysical determinations of stellar densities, internal structure, and ages. An asteroseismology mission with the same scope as Gaia would change everything forever! In the meantime, asteroseismology can be used to calibrate the stars we most care about.

At the end of the sessions, I spoke, emphasizing the need to preserve and transmit likelihood functions, echoing comments by Juric that this is a goal of the planning in the LSST software group.

2014-07-08

GaiaCal, day two

A lot of the second day of #GaiaCal was about "benchmark stars". Indeed, my concerns about "truth" from yesterday got more severe: I don't think there is any possibility of knowing the "true" value of Fe/H or anything else for a star, so we should probably figure out ways of doing our science without making reference to such inaccessible things. One approach is to define the results of a certain set of procedures on a certain set of stars to be "truth" and then warp all models and methods to reproduce those truths. That's worth thinking about. During the day's proceedings, Blomme and Worley showed some incredible spectra and spectral results on the benchmarks, including abundance measurements for some 25 elements.

Along different lines, Cropper gave an astounding talk about the Gaia RVS spectral data reduction pipeline. It was astounding because it involves a complicated, empirical, self-calibrated model of charge-transfer efficiency and radiation damage, which is continuously updated as the mission proceeds. It also includes a spectral calibration (wavelength solution, astrometry) that is fit simultaneously with all spectral extractions in a complete self-calibration. Cropper asked the implicit question in his talk: "How do we extract CTE-undamaged spectra from CTE-damaged data?". I think I know how to answer that; resolved to write something about it.

Liu showed amazing empirical calibrations of the LAMOST spectra using SEGUE parameters and support vector machines. He has done a great job of transferring the one survey's system onto the other's data, which gives hope for the benchmark plans that are emerging. Bovy gave a nice talk about ages (birth dates), chemical abundances, and orbital actions serving as stellar "tags" which can then be understood statistically and generatively.

2014-07-07

GaiaCal, day one

Today was the first day of the Astrophysical calibration of Gaia and other surveys meeting at Ringberg Castle in Tegernsee, Germany. The meeting is about methods for combining data to produce good astrophysical parameters from current and future surveys. There were many great talks today, but highlights were the following:

Andrae showed work on simulated data to demonstrate that the final Gaia Catalog will have good parameters (effective temperatures, surface gravities, and so on) for stars. He showed that the differences between models are much larger than the precision of the data, and that therefore no models are expected to be "good fits" in any sense. He showed that they can recalibrate or adjust the input spectral models to force them to agree with the data, and that this seems to work extremely well.

At the end of a talk by Korn, Fouesneau commented that every model incorporated into the Gaia pipelines ought to include (if it has them) predictions for wavelengths outside the Gaia bandpasses. That way, the results of fitting to Gaia data can be used to make predictions in other surveys. This comment resonated with me, because in the talks today there was a lot of talk about the "true" values of astrophysical parameters, but since these latent values are inaccessible to direct observation, we need other ways to assess credibility.

In between talks, Bovy made a nice point, which is that if Gaia wants end users to be able to compute likelihoods (and not just get point estimates or best-fit values) for astrophysical parameters, they should project all models onto their various systems (the G magnitude system, the BP/RP low-resolution spectral system, and the RVS high-resolution spectral system). This would permit end users to re-compute likelihoods for all data for all models.
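In sketch form (my notation, and assuming Gaussian uncertainties), what this buys an end user is the ability to evaluate, for any model, the likelihood of the published measurements given that model's projected predictions:

    import numpy as np

    def ln_like(data, ivar, prediction):
        # data: G, BP/RP, and RVS measurements for one star, flattened;
        # prediction: one model projected onto those same systems.
        resid = data - prediction
        return -0.5 * np.sum(resid**2 * ivar + np.log(2.0 * np.pi) - np.log(ivar))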

2014-07-04

web cams and James Bradley

Over lunch, Markus Pössel (MPIA) mentioned that he can measure the sidereal day very accurately, using a fish-eye or wide-field web cam pointed at the sky. This led us to a discussion of whether it would be possible to repeat Bradley's experiments of the 1700s that measured stellar aberration, precession of the Earth's axis, and nutation. Pössel had the very nice realization that you don't have to specifically identify any individual stars in any images to do this experiment; you can just do cross-correlations of multi-pixel time series. That's brilliant! We decided to discuss again later this month along with a possible (high school) student researcher.
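A sketch of the measurement (data layout and names all invented for illustration): the sidereal period is the time offset that maximizes the cross-correlation between the pixel time series on consecutive nights; no star identification required.

    import numpy as np

    def best_lag(night_a, night_b, lags):
        # night_a, night_b: (npix, ntimes) brightness of the same web-cam
        # pixels on two nights, on a uniform time grid; lags in samples.
        T = night_a.shape[1]
        cc = [np.sum(night_a[:, lag:] * night_b[:, :T - lag]) for lag in lags]
        return lags[int(np.argmax(cc))]  # convert to time via the frame rate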

Before that, Roberto Decarli (MPIA) and I discussed various projects. The most interesting is whether or how you can "stack data" (combine information from many images or many parts of an image) in interferometric imaging data. Decarli has shown that you can do this stacking in the Fourier space rather than in the image space. That's excellent, because the noise properties of the data are (conceivably) known there, but never understood properly in the image space. I gave him my usual advice, which is to replace the stacking with some kind of linear fit or regression: Stacking in bins is like linear fitting but under hard assumptions about the noise model and properties of the sources. We agreed to test some ideas.
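My advice, in code (a deliberately over-simplified sketch): stacking is the weighted-mean special case of a least-squares fit for one common amplitude, done in the plane where the noise is actually understood.

    import numpy as np

    def stack_as_fit(vis, ivar):
        # vis: complex visibilities phase-shifted to the source positions;
        # ivar: inverse variances, (conceivably) known in the Fourier plane.
        # The least-squares common amplitude is the inverse-variance mean.
        return np.sum(vis * ivar) / np.sum(ivar)

Replacing the single amplitude with a design matrix turns this into the regression I am advocating, with source fluxes and confounders fit simultaneously.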

2014-07-03

Gaia status, Kepler calibration

Great conversations today with Brewer about sampling and inference, Knut Jahnke and Stefanie Wachter (MPIA) and Doug Finkbeiner (CfA) about Euclid calibration, and Jennifer Hill (NYU) about data science. But that's not all:

In the morning, Coryn Bailer-Jones (MPIA) gave a status update on the Gaia mission, which was launched in December. It is performing to spec in almost all respects. Bailer-Jones gave us straight talk on three issues they now face: The scattered light (from sources not in the field of view) is much higher than expected, reducing the magnitude limits for all accuracies and capabilities. There is something (probably water) affecting the total photometric throughput. It looks like this can be mitigated with occasional thermal cycling of the optics. The "fundamental angle" between the two telescopes seems to be varying with time with a larger amplitude than expected. This can be modeled, so long as it is understood properly. I think I speak for the entire astronomical community when I say that we can't wait for early data releases and wish the Gaia Collaboration the best of luck in addressing these challenges.

In the afternoon, Dun Wang, Foreman-Mackey, Schölkopf, and I worked through Wang's results on Kepler data-driven calibration. I am pretty excited about this project: Wang has shown that when we "train" the model on data not near (in time) to an injected transit, the data recalibrated with the model produces unbiased (on average) inferences about the transit. We assigned Wang next steps.
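A minimal version of the setup (variable names mine): predict the target pixel as a linear combination of predictor pixels from other stars, fit only on the out-of-transit times, and look at the residual.

    import numpy as np

    def calibrate(target, predictors, train):
        # target: (T,) pixel lightcurve; predictors: (T, P) pixel lightcurves
        # of other stars; train: boolean mask of times far from the transit.
        w, *_ = np.linalg.lstsq(predictors[train], target[train], rcond=None)
        return target - predictors @ w  # residual should preserve the transit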

2014-07-02

cheap and dirty chemical abundances

It was a packed research day, with great conversations about data analysis and inference with Rix, Dalcanton, Fouesneau, Price-Whelan, Schölkopf, and others. I have so many things to do this summer!

The highlight conversation of the day was with Rix and Melissa Ness (MPIA) about the possibility that we might be able to build data-driven metallicity indicators or indices for APOGEE spectra by performing regressions of the data against known metallicities and other meta-data (confounders, if you will). We came up with a baseline plan and talked through the iterated linear algebra the project requires. The baseline project has strong overlap with things we worked on with Conroy & Choi this Spring. The idea is to have the (noisily) labeled data decide what operators on the data are the best predictors or estimators of chemical abundances by building an empirical predictive model of the labels.
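The baseline, as a toy (my toy, that is; the real plan involves iterated linear algebra beyond this): ridge-regress the known labels against the spectral pixels, then apply the learned weights to unlabeled spectra.

    import numpy as np

    def fit_metallicity_index(spectra, labels, lam=1e-2):
        # spectra: (N stars, P pixels); labels: (N,) noisily known [Fe/H].
        X = np.hstack([np.ones((spectra.shape[0], 1)), spectra])
        w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ labels)
        return w  # predict with np.hstack([1.0, new_spectrum]) @ w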

2014-07-01

what is science?

It was a relatively low-research day, although I did check in with So Hattori and Dun Wang on their projects. I had a short conversation with Hennawi about next projects with SDSS-III and SDSS-IV spectroscopy. (Today is the last day of SDSS-III observing; SDSS-IV starts tomorrow!) We talked about Hennawi's Princeton-esque philosophy that great science happens when there are precise measurements that can be made and precise theoretical predictions for those measurements. This is a high bar for astrophysics: Many research areas lack precise measurements, or precise predictions, or both. Interesting to think about projects through this lens. I am not particularly endorsing it, of course, since lots of science happens through discovery and description too.

2014-06-30

arrival in HD

I arrived in Heidelberg for my annual stay at MPIA today. I had a brief conversation with Rix about Milky Way streams, in preparation for Price-Whelan's arrival tomorrow. I also had a brief conversation with Finkbeiner (CfA) about PanSTARRS calibration. He can show that things are good at the milli-magnitude level, but I think he ought to be able to say even more precise things, given the number of stars and the regularities. In the end, most of his precision tests involve comparison to the SDSS-I imaging data, so that is what really limits the precision of his tests.

2014-06-27

over-fitting

To test our pixel-level model that is designed to self-calibrate the Kepler data, we had Dun Wang insert signals into a raw Kepler pixel lightcurve and then check whether, when we self-calibrate, we fit them out or preserve them. That is, does linear fitting reduce the amplitudes of signals we care about and bias our results? The answer is a resounding yes. Even though a Kepler quarter has some 4000 data points, if we fit a pixel lightcurve with a linear model with more than a few dozen predictor pixels from other stars, the linear prediction will bias or over-fit the signal we care about. We spent some time in group meeting trying to understand how this could be: It indicates that linear fitting is crazy powerful. Wang's next job is to look at a train-and-test framework in which we only use time points far from the time points of interest to train the model. Our prediction is that this will protect us from the over-fitting. But I have learned the hard way that when fits get hundreds of degrees of freedom, crazy shit happens.
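The train-and-test scheme, in sketch form (the window size is a knob to be tuned, not a result): for each time of interest, train only on points outside an exclusion window around it, then predict at that time.

    import numpy as np

    def train_mask(times, t0, half_window):
        # times: (T,) observation times; t0: the time being predicted.
        return np.abs(times - t0) > half_window  # True = safe for training

A mask like this would feed the training step of the linear fit described in the 2014-07-03 entry above.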