stellar spectra with issues, probabilistic frequency differences

At group meeting, Kopytova described why she wants to measure C/O ratios for very cool stars and hot planets: the ratio traces where they formed in the proto-planetary disk. We discussed the (frequently arising) point that the spectra have bad continuum normalization (or, equivalently, bad calibration), so it is hard to compare the models to the data at the precision of the data. This problem is not easily solved; many investigators "do the same thing" to the data and the models to match the continuum normalizations. However, these continuum procedures are usually signal-to-noise-dependent, and the models are rarely at the same signal-to-noise as the data! Anyway, we proposed a simple plan for Kopytova, very similar to Foreman-Mackey's K2 work: we will instantiate many nuisance parameters (to cover the calibration issues), infer them simultaneously with the parameters of interest, and marginalize them out. Similar ideas have appeared in this space associated with the names Cushing and Czekala!
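A minimal sketch of the idea, under the assumption that the calibration nuisances can be modeled as a polynomial continuum distortion multiplying the model spectrum: because those polynomial coefficients enter linearly, they can be fit (or, for Gaussian noise and broad priors, effectively marginalized) with linear least squares at every setting of the physical parameters. The function name and the Lorentzian-free toy setup here are my own illustration, not Kopytova's actual pipeline.

```python
import numpy as np

def continuum_marginalized_chi2(wave, flux, sigma, model, deg=3):
    """Chi-squared of a model spectrum after fitting out a polynomial
    continuum distortion (the nuisance parameters) by weighted linear
    least squares.  For Gaussian noise and flat priors on the linear
    coefficients, profiling them out this way agrees with full
    marginalization up to a (parameter-independent-ish) determinant term."""
    # scale wavelength to a unit-ish interval for a well-conditioned design matrix
    x = (wave - wave.mean()) / (wave.max() - wave.min())
    # design matrix: model spectrum times each polynomial continuum term
    A = model[:, None] * np.vander(x, deg + 1)
    Aw = A / sigma[:, None]          # noise-weighted design matrix
    yw = flux / sigma                # noise-weighted data
    coeffs, *_ = np.linalg.lstsq(Aw, yw, rcond=None)
    resid = yw - Aw @ coeffs
    return resid @ resid, coeffs
```

In a real application this chi-squared would be evaluated inside the likelihood for each trial model spectrum, so the continuum nuisances are re-fit (marginalized) at every step of the inference rather than being "corrected" once up front.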

NYU CDS MSDS student Bryan Ball told us about his parameterized model of a periodogram, and his attempts to fit it using likelihood optimization. He is well on the way to having a probabilistic approach to obtaining the "large frequency difference" of great importance in asteroseismology. At the end of the meeting, Foreman-Mackey showed us an awesome demo of the method we are calling "PCP" (principal component pursuit), which models a data matrix as a sum of a low-rank (PCA-like) matrix and a sparse (mostly zero) matrix. The sparse component picks up the outliers, and the formulation also handles missing data.
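For concreteness, here is a sketch of the standard principal-component-pursuit decomposition by ADMM, with singular-value thresholding for the low-rank part and entrywise soft-thresholding for the sparse part. The default parameter choices follow the common conventions in the robust-PCA literature; I don't know the details of Foreman-Mackey's demo, so treat this as an illustration of the technique, not his code.

```python
import numpy as np

def pcp(M, mu=None, lam=None, n_iter=500, tol=1e-7):
    """Principal Component Pursuit (robust PCA) via ADMM:
    decompose M ~= L + S with L low-rank and S sparse.
    The sparse matrix S absorbs the outliers."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))          # conventional sparsity weight
    if mu is None:
        mu = 0.25 * m * n / np.abs(M).sum()     # conventional ADMM step size

    def shrink(X, tau):
        # entrywise soft-threshold: drives small values exactly to zero
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def svt(X, tau):
        # singular-value thresholding: the low-rank (PCA-like) update
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return (U * shrink(s, tau)) @ Vt

    S = np.zeros_like(M)
    Y = np.zeros_like(M)                        # dual variable
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)
        if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
            break
    return L, S
```

Missing data can be handled by restricting the residual `M - L - S` to the observed entries, which is one reason the method is attractive for irregular survey data.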

1 comment:

  1. "parameterized model of a periodogram, and his attempts to fit it using likelihood optimization"

    I thought people had been doing that kind of thing for a decade, using the Whittle likelihood?
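For reference, the Whittle likelihood treats each periodogram ordinate I(f) as (approximately) exponentially distributed with mean equal to the model power spectrum S(f), so the negative log-likelihood is the sum of log S(f) + I(f)/S(f). A minimal sketch, with a hypothetical Lorentzian-plus-white-noise spectral model standing in for whatever parameterization Ball actually uses:

```python
import numpy as np
from scipy.optimize import minimize

def whittle_nll(theta, freq, power):
    """Negative Whittle log-likelihood: each periodogram ordinate is
    approximately exponential with mean S(f; theta)."""
    white, amp, width = np.exp(theta)       # log-parameterization keeps S > 0
    S = white + amp / (1.0 + (freq / width) ** 2)   # toy Lorentzian + noise model
    return np.sum(np.log(S) + power / S)

def fit_spectrum(freq, power, theta0):
    """Maximum-likelihood fit of the spectral-model parameters."""
    res = minimize(whittle_nll, theta0, args=(freq, power),
                   method="Nelder-Mead")
    return np.exp(res.x)
```

An asteroseismic version would replace the toy model with a comb of mode peaks whose spacing is the large frequency difference, making that spacing an explicit parameter of the likelihood.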