2017-10-20

stellar age–velocity relation

Jonathan Bird (Vandy) and I spent the morning working together on his paper on the age–velocity relationship in the Milky Way disk. He has absolutely beautiful results, from APOGEE red-clump stars and Gaia DR1 transverse kinematics. The thing that is new is that (thanks to Martig and Ness) he has actually useful age estimates for many hundreds of stars. And we will have the same for tens of thousands in the overlap with Gaia DR2. Indeed, we commented in the paper that SDSS-V will make this possible at scale. The great thing about the ages is that even with hundreds of stars, we measure the age–velocity relation comparably well to studies that involved orders of magnitude more stars.

We discussed the final presentation in the paper. We worked through the figures and drew a simple graphical model to illustrate the project. We then went, very carefully, through the assumptions of the project, so we can state them explicitly at the outset of our methods section, and then use them to structure the discussion at the end. It's a fun intellectual exercise to go through these assumptions carefully; somehow you only understand a project substantially after it is finished!

2017-10-19

self-calibration of stellar abundances

I spent the day at Vanderbilt, where I gave a talk and had many valuable conversations. Some were about data science: Andreas Berlind (Vanderbilt) is chairing a committee to propose a model for data science at Vanderbilt. We discussed the details that have been important at NYU.

One impressive project I learned about today was Hypatia, a compendium of all detailed stellar abundance measurements (and relevant housekeeping data) in the literature. Over dinner, Natalie Hinkel (Vanderbilt) and I discussed the possibility that this catalog could be used for some kind of self-calibration of all abundance measurements. That's an interesting idea, and connects to things I have discussed over the years with Andy Casey (Monash).

2017-10-18

self-calibrating pulsar arrays, and much more

I had a great conversation with Chiara Mingarelli (Flatiron) and Ellie Schwab (AMNH) today about pulsar-timing arrays and gravitational-wave sources. We are developing some ideas about self-calibration of the arrays, such that we might be able to simultaneously search for coherent sources (that is: not just stochastic backgrounds) and also precisely determine the distances to the individual pulsars to many digits of accuracy! It is futuristic stuff, and there are lots of ways it might fail badly, but if I am right that the self-calibration of the arrays is possible, it would make the arrays a few to tens of times more sensitive to sources! We started with Mingarelli assigning us some reading homework.

In the Stars group meeting, we had a productive discussion led by Megan Bedell (Flatiron), Andrew Mann (Columbia), and John Brewer (Yale) about things learned at the recent #KnowThyStar conference. There are some new uses of machine learning and data-driven models that I might need to spend some time criticizing! And it appears that there are some serious discrepancies between asteroseismic scaling relations for stellar radii and interferometric measurements. Not bigger than those expected by the stellar experts, apparently, but much bigger than assumed by some of the exoplanet community.

Prior to that, in our weekly Gaia DR2 prep working session, we discussed the use of proper motion as a distance indicator in a post-reduced-proper-motion world. That is: The assumptions underlying reduced proper motion are not great, and will be strongly violated in the DR2 data set. So let's replace it with a much better thing!
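
For reference, here is the standard definition we would be retiring (the formula is standard; the gloss on its assumption is mine):

```latex
H \equiv m + 5\log_{10}\mu + 5
  = M + 5\log_{10}\!\left(\frac{v_T}{4.74\ \mathrm{km\,s^{-1}}}\right)
```

with the proper motion μ in arcsec per year. Treating H as a proxy for the absolute magnitude M amounts to assuming one typical transverse speed v_T for every star, and that is exactly the assumption that breaks in a DR2-quality data set.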

Adrian Price-Whelan (Princeton) showed some incredible properties of (flowing from beautiful design of) the astropy coordinates package. Damn!

2017-10-17

writing projects

Coming off my personal success of (finally) getting a paper on the arXiv yesterday (check the footnote on the cover page), I worked through two projects that are close to being writeable or finishable. The first is a paper with Stephen Feeney (Flatiron) on the Lutz–Kelker correction, when to use it (never) and what it is (a correction from ML to MAP). The second is a document I wrote many months ago about finding similar or identical objects in noisy data. After I read through both, I got daunted by the work that needs to happen! So I borked. I love my job! But writing is definitely hard.

2017-10-16

discovery! submission!

It was an important day for physics: The LIGO/VIRGO collaboration and a huge group of astronomical observational facilities and teams announced the discovery of a neutron-star–neutron-star binary inspiral. It has all the properties it needs to have to be the source of r-process elements, as the theorists have been telling us it would. Incredible. And a huge win for everyone involved. Lots of questions remain (for me, anyway) about the 2-s delay between GW and EM, and about the confidence with which we can say we are seeing the r process!

It was also an unusual day for me: After working a long session on the weekend, Dan Foreman-Mackey (Flatiron) and I finished our pedagogical document about MCMC sampling. I ended the day by posting it to arXiv and submitting it (although this seems insane) to a special issue of the ApJ. I don't write many first-author publications, so this was a very, very good day.

2017-10-13

calibration of ZTF; interpolation

I am loving the Friday-morning parallel working sessions in my office. I am not sure that anyone else is getting anything out of them! Today Anna Ho (Caltech) and I discussed things in my work on calibration and data-driven models (two extremely closely related subjects) that might be of use to the ZTF and SEDM projects going on at Caltech.

Late in the morning, an argument broke out about using deep learning to interpolate model grids. Many projects are doing this, and it is interesting (and odd) to me that you would choose a hard-to-control deep network when you could use an easy-to-control function space (like a Gaussian Process, stationary or non-stationary). But the deep-learning toothpaste is hard to put back into the tube! That said, it does have its uses. One of my medium-term goals is to write something about what those uses are.
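
For concreteness, here is the kind of easy-to-control interpolator I have in mind: a minimal scikit-learn sketch on an invented one-dimensional grid (the kernel, length scale, and grid are all placeholders, not recommendations):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

grid_x = np.linspace(3500., 7000., 15)[:, None]  # say, a coarse Teff grid
grid_y = np.sin(grid_x[:, 0] / 500.)             # stand-in for slow model outputs

kernel = ConstantKernel(1.0) * RBF(length_scale=500.)
gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-10, normalize_y=True)
gp.fit(grid_x, grid_y)

x_new = np.linspace(3500., 7000., 200)[:, None]
y_new, y_err = gp.predict(x_new, return_std=True)  # interpolant plus uncertainty
```

The point of the contrast: the length scale is one interpretable knob, and you get predictive uncertainties, neither of which a generic deep network hands you for free.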

2017-10-12

age-velocity; finishing

I had a great, long call with Jonathan Bird (Vandy) to discuss his nearly-finished paper on the age–velocity relation of stars in the Gaia DR1 data. We discussed the addition of an old, hot population, in addition to the population that shows the age–velocity relation. That's a good idea, and accords with our beliefs, hence even gooder.

I spent the rest of my research time today working through the text of Dan Foreman-Mackey (Flatiron) and my MCMC tutorial. We are trying to finish it this week (after five-ish years)!

2017-10-11

WDs in Gaia, M33, M stars, and more

In our weekly parallel-working Gaia DR2 prep meeting, two very good ideas came up. The first is to look for substructure in the white-dwarf sequence and see if it can be interpreted in terms of binarity. This is interesting for two reasons: unresolved WD binaries should be the progenitors of Type Ia supernovae, and they might be formed by a different evolutionary channel than the single WDs and therefore be odd in interesting ways. The second idea was to focus on giant stars in the halo, and look for substructure in 3+2-dimensional (position plus proper-motion) space. The idea is: If we can get giant distances accurately enough (and maybe we can, with a model like this), we ought to see the substructure in the Gaia data alone; that is: No radial velocities necessary. Of course we will have radial velocities (and chemistry) for a lot of the stuff.

In the stars group meeting, many interesting things happened: Anna Ho (Caltech) spoke about time-domain projects just starting at Caltech. They sure do have overwhelming force. But there are interesting calibration issues. She has accidentally found many (very bright!) flaring M stars, which is interesting. Ekta Patel (Arizona) talked about how M33 gets its outer morphology. Her claim is that it is not caused by its interaction with M31. If she's right, she makes predictions about dark-matter substructure around M33! Emily Stanford (Columbia) showed us measurements of stellar densities from exoplanet transits that are comparable to asteroseismology in precision. Not as good, but close! And different.

In the afternoon I worked on GALEX imaging with Dun Wang (NYU), Steven Mohammed (Columbia), and David Schiminovich (Columbia). We discussed how to release our images and sensitivity maps such that they can be responsibly used by the community. And Andrina Nicola (ETH) spoke about combining many cosmological surveys responsibly into coherent cosmological constraints. The problem is non-trivial when the surveys overlap volumetrically.

2017-10-10

a day at MIT

I spent the day today at MIT, to give a seminar. I had great conversations all day! Just a few highlights: Rob Simcoe and I discussed spectroscopic data reduction and my EPRV plans. He agreed that, in the long run, the radial-velocity measurements should be made in the space of the two-d pixel array, not extracted spectra. Anna Frebel and I discussed r-process stars, r-process elements, and chemical-abundance substructure in the Galactic halo. We discussed the immense amount of low-hanging fruit coming with Gaia DR2. I had lunch with the students, where I learned a lot about research going on in the Department. In particular Keaton Burns had interesting things to say about the applicability of spectral methods in solving fluid equations in some contexts. On the train up, I worked on the theoretical limits of self-calibration: What is the Cramér–Rao bound for flat-field components given a self-calibration program? This, for Euclid.

2017-10-09

Euclid and MCMC

I did some work on the (NYC) long weekend on two projects. In the first, I built some code to make possible observing strategies for the in-flight self-calibration program for ESA Euclid. Stephanie Wachter (MPIA) contacted me to discuss strategies and metrics for self-calibration quality. I wrote code, but realized that I ought to be able to deliver a sensible metric for deciding on dither strategy. This all relates to this old paper.

On Monday I discussed our nearly-finished MCMC paper with Dan Foreman-Mackey (Flatiron) and we decided to finish it for submission to the AAS Journals. I spent time working through the current draft and reformatting it for submission. There is lots to do, but maybe I can complete it this coming week?

2017-10-06

dust-hidden supernovae

In my weekly parallel-hacking, I re-learned how to use kplr with Elisabeth Andersson (NYU).

This was followed by a nice talk by Mansi Kasliwal (Caltech) about the overwhelming force being brought to bear on time-domain astronomy by her and others at Caltech. One of their projects will be imaging more than 3000 square degrees an hour! There isn't enough solid angle on the sky for them. She is finding lots of crazy transients that are intermediate in luminosity between supernovae and novae, and she doesn't know what they are. Also she may be finding the (long expected) fully-obscured supernovae. If she has found them, she may be doubling the observed supernova rates in nearby galaxies. Great stuff.

The day ended with lightning talks at the CCPP, with faculty introducing themselves to the new graduate students.

2017-10-05

uncertainty propagation

I started the day with a long discussion with Ana Bonaca (Harvard) about how to propagate uncertainties in Galactic gravitational potential parameters into some visualization of what we (in that context) know about the acceleration field. In principle, the acceleration field is more directly constrained (by our dynamical systems) than the potential. What we want (and it is ill-posed) is some visualization of what we know and don't know. Oddly, this conversation is a conversation about linear algebra above all else. We admitted to each other on the call that we are both learning a lot of math in this project!

[My day ended early because: NYC Comic Con!]

2017-10-04

Gaia and exoplanets

At our weekly Gaia DR2 prep workshop, a bunch of good ideas emerged from Megan Bedell (Flatiron) about exoplanet and star science. Actually, some of the best ideas could be done right now, before DR2! These include looking at our already-known co-moving pairs of stars for examples with short-cadence Kepler data or known planetary systems. There is also lots to do once DR2 does come out. In this same workshop, David Spergel (Flatiron) summarized the work that the Gaia team has done to build a simulated universe in which to test and understand its observations. These are useful for trying out projects in advance of the data release.

In the afternoon, everyone at the Flatiron CCA, at all levels, gave 2-minute, 1-slide lightning talks. It was great! There were many themes across the talks, including inference, fundamental physics, and fluid dynamics. On the first topic: There is no shortage of people at Flatiron who are thinking about how we might do better at learning from the data we have.

2017-10-03

systematizing surprise; taking logs

I had a substantial conversation with Kate Storey-Fisher (NYU) about possible anomaly-search projects in cosmology. The idea is to systematize the search for anomalies, and thereby get some control over the many-hypotheses issues. And also to spin off methods for generating high-quality statistics (data compressions) for various purposes. We talked about the structure of the problem, and also what are the kinds of limited domains in which we could start. There is also a literature search we need to be doing.

I also made a Jupyter notebook for Megan Bedell (Flatiron), demonstrating that there is a bias when you naively take the log of your data and average the logs, instead of averaging the data. This bias is there even when you aren't averaging; in principle you ought to correct any model you make of the log of data for this effect, or at least when you transform from linear space to log or back again. Oh wait: This is only relevant if you are not also transforming the noise model appropriately! Obviously you should transform everything self-consistently! In this case we have nearly-Gaussian noise in the linear space (because physics) and we want to treat the noise in the log space as also linear (because computational tractability). Fortunately we are working with very high signal-to-noise data, so these biases are small.
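
Here are the guts of that notebook, re-done as a minimal numpy sketch (the numbers are invented); the third print is the standard second-order Taylor correction, E[log x] ≈ log μ − σ²/(2μ²):

```python
import numpy as np

rng = np.random.default_rng(42)
truth, sigma = 100.0, 10.0                # signal-to-noise of 10
data = truth + sigma * rng.standard_normal(1_000_000)

print(np.mean(np.log(data)))                      # biased low
print(np.log(truth))                              # the naive expectation
print(np.log(truth) - 0.5 * sigma**2 / truth**2)  # second-order prediction
```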

2017-10-02

exploration vs exploitation

I met with Lauren Anderson (Flatiron) first-thing to figure out how we can munge our hacky #GaiaSprint projects into real and cutting-edge measurements of the Milky Way. We looked at the VVV infrared survey because it ought to be better than 2MASS for mapping the inner disk and bulge. We looked at using SDSS photometry to map the halo. On the latter, the dust modeling is far simpler, because for distant stars, the dust is just a screen, not an interspersed three-dimensional field. We also discussed the ever-present issue for a postdoc (or any scientist): How much time should you spend exploiting things you already know, and how much exploring new things you want to learn?

In the morning I also discussed the construction of (sparse) interpolation operators and their derivatives with Megan Bedell (Flatiron).

At lunch, Yacine Ali-Haimoud (NYU) gave a great brown-bag talk on the possibility that black holes make up the dark matter. He showed that there are various different bounds, all of which depend on rich astrophysical models. In the end, constraints from small-scale clustering rule it out (he thinks). Matt Kleban (NYU) and I argued that the primordial black holes could easily be formed in some kind of glass that has way sub-Poisson local power. Not sure if that's true!

2017-09-30

LIGO noise correlations

I spent some weekend science time reading this paper on LIGO noise that claims that the time delays in the LIGO detections (between the Louisiana and Washington sites) are seen in the noise too—that is, that the time delays or coincidence aspects of LIGO detections are suspect. I don't understand the paper completely, but they show plots (Figure 3) with very strong phase–frequency relationships in data that are supposed to be noise-dominated. That's strange; if there are strong phase–frequency relationships, then there are almost always visible structures in real space. (To see this, imagine what happens as you modify the zero of time: The phases wind up!) Indeed, it is the phases that encode real-space structure. I don't have an opinion on the bigger question yet, but I would like to have seen the real-space structures creating the phase–frequency correlations they show.
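
To make the phase-winding point concrete, here is a toy numpy experiment (white noise and a circular shift; nothing to do with their actual data): a time shift multiplies every Fourier coefficient by a phase ramp, so phase becomes exactly linear in frequency.

```python
import numpy as np

rng = np.random.default_rng(0)
n, t0 = 1024, 37                     # hypothetical length and shift (samples)
x = rng.standard_normal(n)           # broadband signal, structure everywhere
x_shifted = np.roll(x, t0)           # circular time shift

f = np.fft.rfftfreq(n)               # frequency in cycles per sample
ratio = np.fft.rfft(x_shifted) / np.fft.rfft(x)
print(np.allclose(ratio, np.exp(-2j * np.pi * f * t0)))  # True: a pure phase ramp
```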

2017-09-29

the life stories of counter-rotating galaxies

Today was the third experiment with Friday-morning parallel working in my office. It is like a hack week spread out over months! The idea is to work in the same place and build community. During the session, I worked through a multi-linear model for stellar spectra and tellurics with Bedell, based on conversations with Foreman-Mackey earlier in the week. I also worked through a method for generating realistic and self-consistent p(z) functions for fake-data experiments with Malz. This is a non-trivial problem: It is hard to generate realistic fake data; it is even harder to generate realistic posterior PDFs that might come out of a probabilistic set of analyses of those data.

Just before lunch, Tjitske Starkenburg (Flatiron) gave the NYU Astro Seminar. She mainly talked about counter-rotating galaxies. She took the unusual approach of following up, in the simulations she has done, some typical examples (where the stars rotate opposite to the gas) and figuring out their individual histories (of accretion and merging and movement in the large-scale structure). Late in the day, she and I returned to these subjects to figure out if there might be ways to read a galaxy's individual cosmological-context history off of its present-day observable properties. That's a holy grail of galaxy evolution.

2017-09-28

what's the point of direct-detection experiments?

In the morning I spoke with Ana Bonaca (Harvard) and Chris Ick (NYU) about their projects. Bonaca is looking at multipole expansions of the Milky Way potential from an information-theory (what can we know?) point of view. We are working out how to visualize and test her output. Ick is performing Bayesian inference on a quasi-periodic model for Solar flares. He needs to figure out how to take his output and make a reliable claim about a flare being quasi-periodic (or not).

Rouven Essig (Stony Brook) gave a nice Physics Colloquium about direct detection of dark matter. He is developing strong limits on dark matter that might interact with leptons. The nice thing is that such a detection would be just as important for the light sector (new physics) as for the dark sector. He gave a good overview of the direct-detection methods. After the talk, we discussed the challenge of deciding what to do as non-detections roll in. This is not unlike the issues facing accelerator physics and cosmology: If the model is just what we currently think, then all we are doing is adding precision. The nice thing about cosmology experiments is that even if we don't find new cosmological physics, we usually discover and measure all sorts of other things. Not so true with direct-detection experiments.

2017-09-27

Gaia, EPRV, photons

In our Gaia DR2 prep workshop, Stephen Feeney (Flatiron) led a discussion on the Lutz–Kelker correction to parallaxes, and when we should and shouldn't use it. He began by re-phrasing the original LK paper in terms of modern language about likelihoods and posteriors. Once you put it in modern language, it becomes clear that you should (almost) never use these kinds of corrections. It is especially wrong to use them in the context of Cepheid (or other distance-ladder) cosmology; this is an error in the literature that Feeney has uncovered.
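
To put that translation in symbols (my gloss, not Feeney's words): the LK correction is the move from the maximum of the likelihood to the maximum of a posterior built with a uniform-space-density prior on the true parallax,

```latex
p(\varpi \mid \hat{\varpi}) \propto p(\hat{\varpi} \mid \varpi)\,p(\varpi),
\qquad p(\varpi) \propto \varpi^{-4}\ \ \text{(uniform space density)}
```

so it is a prior-dependent (indeed population-dependent) choice, not a fix to the data, which is why applying it blindly is (almost) never right.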

That discussion devolved into a discussion of the Gaia likelihood function. Nowhere in the Gaia papers does it clearly say how to reconstruct a likelihood function for the stellar parallaxes from the catalog, though a suggestion does appear in the nice papers by Astraatmadja, such as this one. Astraatmadja is a Gaia insider, so his suggestion is probably correct, but there isn't an equivalent statement in the official data-release papers (to my knowledge). There is a big set of assumptions underlying this likelihood function (which is the one we use); we unpacked them a bit in the meeting. My position is that this is so important, it might be worth writing a short note for arXiv.

In Stars group meeting, Megan Bedell (Flatiron) showed her current status on measuring extremely precise radial velocities using data-driven models for the star and the tellurics. It is promising that her methods seem to be doing better than standard pipelines; maybe she can beat the world's best current precision?

Chuck Steidel (Caltech) gave a talk in the afternoon about things he can learn about ionizing photons from galaxies at high redshift by stacking spectra. He had a number of interesting conclusions. One is that high-mass-star binaries are important! Another is that the escape fraction for ionizing photons goes up with the strength of nebular lines, and down with total UV luminosity. He had some physical intuitions for these results.

2017-09-26

machine learning

The day started with a somewhat stressful call with Hans-Walter Rix (MPIA), about applied-math issues: How to make sure that numerical (as opposed to analytic) derivatives are calculated correctly, how to make sure that linear-algebra operations are performed correctly when matrices are badly conditioned, and so on. The context is: Machine-learning methods have all sorts of hard numerical issues under the hood. If you can't follow those things up correctly, you can't do correct operations with machine-learning models. It's stressful, because wrongness here is wrongness everywhere.

Later in the morning, Kilian Walsh (NYU) brought to me some ideas about making the connections between dark-matter simulations and observed galaxies more flexible on the theoretical / interpretation side. We discussed a possible framework for immensely complexifying the connections between dark-matter halos and galaxy properties, way beyond the currently-ascendant HOD models. What we wrote down is interesting, but it might not be tractable.

2017-09-25

thermal relics

In a low-research day, I discussed probabilistic model results with Axel Widmark (Stockholm), a paper title and abstract with Megan Bedell (Flatiron), and Gaia DR2 Milky Way mapping with Lauren Anderson (Flatiron).

The research highlight of the day was an excellent brown-bag talk by Josh Ruderman (NYU) about thermal-relic models for dark matter. It turns out there is a whole zoo of models beyond the classic WIMP. In particular, the number-changing interactions don't need to involve the visible sector. The models can be protected by dark-sector layers and have very indirect (or no) connection to our sector. We discussed the differences between models that are somehow likely or natural and models that are somehow observable or experimentally interesting. These two sets don't necessarily overlap that much!

2017-09-21

GPLVM Cannon

Today Markus Bonse (Darmstadt) showed me (and our group: Eilers, Rix, Schölkopf) his Gaussian-Process latent-variable model for APOGEE spectra. It looks incredible! With only a few latent variable dimensions, it does a great job of explaining the spectra, and its performance (even under validation) improves as the latent dimensionality increases. This is something we have wanted to do to The Cannon for ages: Switch to GP functions and away from polynomials.

The biggest issue with the vanilla GPy GPLVM implementation being used by Bonse is that it treats the data as homoskedastic—all data points are considered equal—when in fact we have lots of knowledge about the noise levels in different pixels, and we have substantial (and known) missing and bad data. So we encouraged him to figure out how to implement heteroskedasticity. We also discussed how to make a subspace of the latent space interpretable by conditioning on known labels for some sources.

2017-09-20

SDSS+Gaia

At our new weekly Gaia DR2 prep meeting, Vasily Belokurov (Cambridge) showed us a catalog made by Sergei Koposov (CMU) which joins SDSS imaging and Gaia positions to make a quarter-sky, deep proper-motion catalog. His point: Many projects we want to do with Gaia DR2 we can do right now with this new matched catalog!

At the Stars group meeting, Ruth Angus led a discussion of possible TESS proposals. These are due soon!

2017-09-19

unresolved binaries

Today Axel Widmark (Stockholm) showed up in NYC for two weeks of collaboration. We talked out various projects and tentatively decided to look at the unresolved binary stars in the Gaia data. That is, do some kind of inference about whether stars are single or double, and if double, what their properties might be. This is for stars that appear single to Gaia (but, if truly double, are brighter than they should be). I suggested we start by asking “what stars in the data can be composed of two other stars in the data?” with appropriate marginalization.
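
The toy version of that query, ignoring the (essential) noise and marginalization for a moment—the helper below is hypothetical, just to fix the arithmetic:

```python
import numpy as np

def mag_sum(m1, m2):
    """Apparent magnitude of two blended, unresolved sources."""
    return -2.5 * np.log10(10 ** (-0.4 * m1) + 10 ** (-0.4 * m2))

print(mag_sum(10.0, 10.0))  # an equal pair is 0.753 mag brighter than either star
```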

2017-09-18

latent-variable models for stars

The day started with various of us (Rix, Eilers, Schölkopf, Bonse) reviewing Bonse's early results on applying a GPLVM to stellar spectra. This looks promising! We encouraged Bonse to visualize the models in the space of the data.

The data-driven latent-variable models continued in the afternoon with Megan Bedell and me discussing telluric spectral models. We were able to debug a sign error and then make a PCA-like model for telluric variations! The results are promising, but there are continuum-level issues everywhere, and I would like a more principled approach to that. Indeed, I could probably write a whole book about continuum normalization at this point (and still not have a good answer).

2017-09-17

regression

Our data-driven model for stars, The Cannon, is a regression. That is, it figures out how the labels generate the spectral pixels with a model for possible functional forms for that generation. I spent part of today building a Jupyter notebook to demonstrate that—when the assumptions underlying the regression are correct—the results of the regression are accurate (and precise). That is, the maximum-likelihood regression estimator is a good one. That isn't surprising—there are very general proofs—but it answers some questions (that my collaborators have) about cases where the labels (the regressors) are correlated in the training set.
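
A minimal version of what the notebook demonstrates, with invented numbers and a deliberately correlated pair of regressors:

```python
import numpy as np

rng = np.random.default_rng(17)
n, true_coeffs = 2000, np.array([1.0, -2.0])

z = rng.standard_normal(n)
labels = np.column_stack([z, z + 0.1 * rng.standard_normal(n)])  # corr ~ 0.995

flux = labels @ true_coeffs + 0.05 * rng.standard_normal(n)      # one "pixel"
ml_est, *_ = np.linalg.lstsq(labels, flux, rcond=None)
print(ml_est)  # close to [1.0, -2.0] despite the strongly correlated regressors
```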

2017-09-15

new parallel-play workshop

Today was the first try at a new group-meeting idea for my group. I invited my NYC close collaborators to my (new) NYU office (which is also right across the hall from Huppenkothen and Leistedt) to work on whatever they are working on. The idea is that we will work in parallel (and independently), but we are all there to answer questions, discuss, debug, and pair-code. It was intimate today, but successful. Megan Bedell (Flatiron) and I debugged a part of her code that infers the telluric absorption spectrum (in a data-driven way, of course). And Elisabeth Andersson (NYU) got kplr and batman installed inside the sandbox that runs her Jupyter notebooks.

2017-09-14

latent variable models, weak lensing

The day started with a call with Bernhard Schölkopf (MPI-IS), Hans-Walter Rix (MPIA), and Markus Bonse (Darmstadt) to discuss taking Christina Eilers's (MPIA) problem of modeling spectra with partial labels over to a latent-variable model, probably starting with the GPLVM. We discussed data format and how we might start. There is a lot of work in astronomy using GANs and deep learning to make data generators. These are great, but we are betting it will be easier to put causal structure that we care about into the latent-variable model.

At Cosmology & Data Group Meeting at Flatiron, the whole group discussed the big batch of weak lensing results released by the Dark Energy Survey last month. A lot of the discussion was about understanding the covariances of the likelihood information coming from the weak lensing. This is a bit hard to understand, because everyone uses highly informative priors (for good reasons, of course) from prior data. We also discussed the multiplicative bias and other biases in shape measurement; how might we constrain these independently from the cosmological parameters themselves? Data simulations, of course, but most of us would like to see a measurement to constrain them.

At the end of Cosmology Meeting, Ben Wandelt (Flatiron) and I spent time discussing projects of mutual interest. In particular we discussed dimensionality reduction related to galaxy morphologies and spatially resolved spectroscopy, in part inspired by the weak-lensing discussion, and also the future of Euclid.

2017-09-13

Gaia, asteroseismology, robots

In our panic about upcoming Gaia DR2, Adrian Price-Whelan and I have established a weekly workshop on Wednesdays, in which we discuss, hack, and parallel-work on Gaia projects in the library at the Flatiron CCA. In our first meeting we just said what we wanted to do, jointly edited a big shared google doc, and then started working. At each workshop meeting, we will spend some time talking and some time working. My plan is to do data-driven photometric parallaxes, and maybe infer some dust.

At the Stars Group Meeting, Stephen Feeney (Flatiron) talked about asteroseismology, where we are trying to get the seismic parameters without ever taking a Fourier Transform. Some of the crowd (Cantiello in particular) suggested that we have started on stars that are too hard; we should choose super-easy, super-bright, super-standard stars to start. Others in the crowd (Hawkins in particular) pointed out that we could be using asteroseismic H-R diagram priors on our inference. Why not be physically motivated? Duh.

At the end of Group Meeting, Kevin Schawinski (ETH) said a few words about auto-encoders. We discussed imposing more causal structure on them, and seeing what happens. He is going down this path. We also veered off into networks-of-autonomous-robots territory for LSST follow-up, keying off remarks from Or Graur (CfA) about time-domain and spectroscopic surveys. Building robots that know about scientific costs and utility is an incredibly promising direction, but hard.

2017-09-12

statistics of power spectra

Daniela Huppenkothen (NYU) came to talk about power spectra and cross-spectra today. The idea of the cross-spectrum is that you multiply one signal's Fourier transform against the complex conjugate of the other's. If the signals are identical, this is the power spectrum. If they differ by phase lags, the answer has an imaginary part, and so on. We then launched into a long conversation about the distribution of cross-spectrum components given distributions for the original signals. In the simplest case, this is about distributions of sums of products of Gaussian-distributed variables, where analytic results are rare. And that's the simplest case!
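
In code, the definition and the phase-lag behavior look like this (a toy, not Huppenkothen's analysis):

```python
import numpy as np

rng = np.random.default_rng(3)
n, lag = 4096, 5
x = rng.standard_normal(n)
y = np.roll(x, lag)                          # the same signal, delayed

cross = np.fft.rfft(x) * np.conj(np.fft.rfft(y))
power = np.fft.rfft(x) * np.conj(np.fft.rfft(x))

print(np.allclose(power.imag, 0.0))          # identical signals: pure (real) power
print(np.abs(cross.imag).max() > 0.0)        # a lag puts power in the imaginary part
```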

One paradox or oddity that we discussed is the following: In a long time series, imagine that every time point gets a value (flux value, say) that is drawn from a very skew or very non-Gaussian distribution. Now take the Fourier transform. By central-limit reasoning, all the Fourier amplitudes must be very close to Gaussian-distributed! Where did the non-Gaussianity go? After all, the FT is simply a rotation in data space. I think it probably all went into the correlations of the Fourier amplitudes, but how to see that? These are old ideas that are well understood in signal processing, I am sure, but not by me!
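
And the paradox in numbers—a sketch with very skewed, exponentially distributed time-domain values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
x = rng.exponential(size=65536)                # skewness 2: very non-Gaussian
X = np.fft.rfft(x)[1:]                         # drop the DC term

print(stats.skew(x))                           # ~2 in the time domain
print(stats.skew(X.real), stats.skew(X.imag))  # ~0: central-limit behavior
```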

2017-09-11

EPRV

Today I met with Megan Bedell, who is just about to start work here in the city at the Flatiron Institute. We discussed our summer work on extreme precision radial-velocity measurements. We have come to the realization that we can't write a theory paper on this without dealing with tellurics and continuum, so we decided to face that in the short term. I don't want to get too bogged down, though, because we have a very simple point: Some ways of measuring the radial velocity saturate the Cramér–Rao bound, many do not!

2017-09-08

reconstruction, modifications to GR

The day started with a conversation with Elisabeth Andersson (NYU) about possible projects. We tentatively decided to look for resonant planets in the Kepler data. I sent her the Luger paper on TRAPPIST-1.

Before lunch, there was a great Astro Seminar by Marcel Schmittfull (IAS) about using the non-linearities in the growth of large-scale structure to improve measurements of cosmological parameters. He made two clear points (to me): One is that the first-order "reconstruction" methods used to run back the clock on nonlinear clustering can be substantially improved upon (and even small improvements can lead to large improvements in cosmological parameter estimation). The other is that there is as much information about cosmological parameters in the skewness as the variance (ish!). After his talk I asked about improving reconstruction even further using machine learning, which led to a conversation with Marc Williamson (NYU) about a possible pilot project.

In the afternoon, after a talk about crazy black-hole ideas from Ram Brustein (Ben-Gurion), Matt Kleban (NYU) and I discussed the great difficulty of seeing strong-field corrections to general relativity in gravitational-wave measurements. The problem is that the radiation signal is dominated by activity well outside the Schwarzschild radius: Things close to the horizon are highly time-dilated and red-shifted and so don't add hugely to the strong parts of the signal. Most observable signatures of departures from GR are probably already ruled out by other observations! With the standard model, dark matter, dark energy, and GR all looking like they have no observational issues, fundamental physics is looking a little boring right now!

2017-09-07

a non-parametric map of the MW halo

The day started with a call with Ana Bonaca (CfA), in which we discussed generalizing her Milky Way gravitational potential to have more structure, substructure, and freedom. We anticipate that when we increase this freedom, the precision with which any one cold stellar stream constrains the global MW potential should decrease. Eventually, with a very free potential, in principle each stream should constrain the gravitational acceleration field in the vicinity of that stream! If that's true, then a dense network of cold streams throughout the Milky Way halo would provide a non-parametric (ish) map of the acceleration field throughout the Milky Way halo!

In the afternoon I pitched new projects to Kate Storey-Fisher (NYU). She wants to do cosmology! So I pitched the projects I have on foregrounds for next-generation CMB and line-intensity mapping experiments, and my ideas about finding anomalies (and new statistics for parameter estimation) in a statistically responsible way. On the latter, I warned her that some of the relevant work is in the philosophy literature.

2017-09-06

MW dynamics

At Flatiron Stars Group Meeting, Chervin Laporte (Columbia) led a very lively discussion of how the Sagittarius and LMC accretion events into the Milky Way halo might be affecting the Milky Way disk. There can be substantial distortions to the disk from these minor mergers, and some of the action comes from the fact that the merging satellite raises a wake or disturbance in the halo that magnifies the effect of the satellite itself. He has great results that should appear on the arXiv soon.

After that, there were many discussions about things Gaia-related. We decided to start a weekly workshop-like meeting to prepare for Gaia DR2, which is expected in April. We are not ready! But when you are talking about billions of stars, you have to get ready in advance.

One highlight of the day was a brief chat with Sarah Pearson (Columbia), Kathryn Johnston (Columbia), and Adrian Price-Whelan (Princeton) about the formal structure of our cold-stream inference models, and the equivalence (or not) of our methods that run particles backwards in time (to a simpler distribution function) or forwards in time (to a simpler likelihood function). We discussed the possibility of differentiating our codes to permit higher-end sampling. We also discussed the information content in streams (work I have been doing with Ana Bonaca of Harvard) and the toy quality of most of the models we (and others) have been using.

2017-09-05

not much; but which inference is best?

Various tasks involved in the re-start of the academic year took out my research time today. But I did have a productive conversation with Alex Malz (NYU) about his current projects and priorities. One question that Malz asked is: Imagine you have various Bayesian inference methods or systems, each of which performs (say) some Bayesian classification task. Each inference outputs probabilities over classes. How can you tell which inference method is the best? That's a hard problem! If you have fake data, you could ask which puts the highest probabilities on the true answer. Or you could ask which does the best when used in Bayesian decision theory, with some actions (decisions) and some utilities, or a bag of actors with different utilities. After all, different kinds of mistakes cost different actors different amounts! But then how do you tell which inference is best on real (astronomical) data, where you don't know what the true answer is? Is there any strategy? Something about predicting new data? Or is there something clever? I am out of my league here.

2017-09-01

#LennartFest day 3

Today was the last day of a great meeting. Both yesterday and today there were talks about future astrometric missions, including the Gaia extension, and also GaiaNIR, SmallJasmine, and Theia. In his overview talk on the latter, Alberto Krone-Martins put a lot of emphasis on the internal monitoring systems for the design, in which there will be lots of metrology of the spacecraft structure, optics, and camera. He said the value of this was a lesson from Gaia.

This point connects strongly to things I have been working on in self-calibration. In the long run, if a survey is designed properly, it will contain enough redundancy to permit self-calibration. In this sense, the internal monitoring has no long-term value. For example, the Gaia spacecraft includes a basic-angle monitor. But in the end, the data analysis pipeline will determine the basic angle continuously, from the science data themselves. They will not use the monitor data directly in the solution. The reason is: The information about calibration latent in the science data always outweighs what's in the calibration data.

That said (and Timo Prusti emphasized this to me), the internal monitoring and calibration data are very useful for diagnosing problems as they arise. So I'm not saying you shouldn't value such systems and data; I'm saying that you should still design your projects so you don't need them at the end of the day. This is exactly how the SDSS imaging-data story played out, and it was very, very good.

I also gave my own talk at the meeting today. My slides are here. I think I surprised some part of the audience when I said that I thought we could do photometric parallax at all magnitudes without ever using any physical or numerical model of stars!

One thing I realized, as I was giving the talk, is that there is a sense in which the data-driven models make very few assumptions indeed. They assume that Gaia's geometric parallax measurements are good, and that its noise model is close to correct. But the rest is just very weak assumptions about functional forms. So there is a sense in which our data-driven model (or a next-generation one) is purely geometric. Photometric parallaxes with a purely geometric basis. Odd to think of that.

At the end of the meeting, Amina Helmi told me about vaex, which is a very fast visualization tool for large data sets, built on clever data structures. I love those!

2017-08-31

#LennartFest day 2

Many great things happened at the meeting today; way too many to mention. Steinmetz showed how good the RAVE-on results are, and also nicely described their limitations. Korn showed an example of an extremely underluminous star, and discussed possible explanations (most of them boring data issues). Brown explained that with a Gaia mission extension, the parameter inference for exoplanet orbit parameters can improve as a huge power (like 4.5?) of mission lifetime. That deserves more thought. Gerhard explained that the MW bar is a large fraction of the mass of the entire disk! Helmi showed plausible halo substructure and got me really excited about getting ready for Gaia DR2. In the questions after her talk, Binney claimed that galaxy halos don't grow primarily by mergers, not even in theory! Hobbs talked about a mission concept for a post-Gaia NIR mission (which would be incredible). He pointed out that the reference frame and stellar positions require constant maintenance; the precision of Gaia doesn't last.

One slightly (and embarrassingly) frustrating thing about the talks today was that multiple talks discussed open clusters without noting that we found a lot ourselves. And several discussed the local standard of rest without mentioning our value. Now of course I (officially) don't mind; neither of these are top scientific objectives for me (I don't even think the LSR exists). But it is a Zen-like reminder not to be attached to material things (like citations)!

2017-08-30

#LennartFest day 1

I broke my own rules and left #AstroHackWeek to catch up with #LennartFest. The reason for the rule infraction is that the latter meeting is the retirement celebration of Lennart Lindegren (Lund) who is one of the true pioneers in astrometry, and especially astrometry in space and at scale. My loyal reader knows his influence on me!

Talks today were somewhat obscured by my travel exhaustion. But I learned some things! Francois Mignard (Côte d'Azur) gave a nice talk on the reference frame. He started with an argument that we need a frame. I agree that we want inertial proper motions, but I don't agree that they have to be on a coordinate grid. If there is one thing that contemporary physics teaches us it is that you don't need a coordinate system. But the work being done to validate the inertial-ness of the frame is heroic, and important.

Floor van Leeuwen (Cambridge) spoke about star clusters. He hypothesized—and then showed—that proper motions can be as informative about distance as parallaxes, especially for nearby clusters. This meshes with things Boris Leistedt (NYU) and I have been talking about, and I think we can lay down a solid probabilistic method for combining these kinds of information responsibly.

Letizia Capitanio (Paris) reminded us (I guess, but it was new to me) that the Gaia RVS instrument captures a diffuse interstellar band line. This opens up the possibility that we could do kinematic dust mapping with Gaia! She also showed some competitive dust maps based on Gaussian Process inferences.

2017-08-29

#AstroHackWeek day 2

Today Jake vanderPlas (UW) and Boris Leistedt (NYU) gave morning tutorials on Bayesian inference. One of Leistedt's live exercises (done in Jupyter notebooks in real time) involved increasing model complexity in a linear fit and over-fitting. I got sucked into this example:

I made a model where y(x) is a (large) sum of sines and cosines (like a Fourier series). I used way more terms than there are data points, so over-fitting is guaranteed, in the case of maximum likelihood. I then did Bayesian inference with this model, but putting a prior on the coefficients that is more restrictive (more informative) as the wave number increases (or the wavelength decreases). This model is a well-behaved Gaussian process! It was nice to see the continuous evolution from fitting a rigid function to fitting a Gaussian process, all in just a few lines of Python.
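
A compressed version of that notebook (the prior choices and numbers are arbitrary; the point is the smooth transition from rigid fit to Gaussian process):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0., 1., 15))                # 15 data points
y = np.sin(4. * x) + 0.1 * rng.standard_normal(15)

K = 200                                             # way more terms than data
k = np.arange(1, K + 1)
A = np.concatenate([np.cos(2 * np.pi * k[None, :] * x[:, None]),
                    np.sin(2 * np.pi * k[None, :] * x[:, None])], axis=1)

prior_var = np.tile(k, 2) ** -4.0                   # tighter prior at high k
sigma2 = 0.1 ** 2
# MAP coefficients for Gaussian prior + Gaussian noise (a ridge-like solve):
coeffs = np.linalg.solve(A.T @ A / sigma2 + np.diag(1. / prior_var),
                         A.T @ y / sigma2)
y_fit = A @ coeffs                                  # sensible, despite K >> 15
```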

2017-08-28

#AstroHackWeek day 1

AstroHackWeek 2017 kicked off today, with Adrian Price-Whelan (Princeton) and me doing a tutorial on machine learning. We introduced a couple of ideas and simple methods, and then we set ten (randomly assigned) groups working on five methods on two different data sets. We didn't get very far! But we tried to get the discussion started.

In the afternoon hacking sessions, Price-Whelan and I looked at some suspicious equations in the famous Binney & Tremaine book, 2ed. In Chapter 4, there are lots of integrals of phase-space densities and we developed an argument that some of these equations must have wrong units. We can't be right—Binney & Tremaine can't be wrong—because all of Chapter 4 follows from these equations. But we don't see what we are doing wrong.

[Note added later: We were wrong and B&T is okay; but they don't define their distribution functions very clearly!]

2017-08-25

#Eclipse2017

I just returned from an off-grid eclipse trip; hence no posting. Also, it was vacation! The total eclipse (which I saw in central Oregon) was extremely dramatic: The air got cold, the sky got so dark we could see the stars, and there was a quasi-sunset around all the horizon. The corona of the Sun could be seen, during totality, out to more than a Solar Diameter off the limb of the Sun. During late stages of partiality, the shadows created an amazing demonstration of the sense in which a shadow is a convolution of a mask with the illumination, and how that convolution depends on distance (between the object and its shadow). I have much more to say about all this (and photographs), but I will save it for one of my other blogs!

The only research I did during the week was discussion of near-term projects with Bernhard Schölkopf (MPI-IS) and Hans-Walter Rix (MPIA), which will involve building latent-variable models (and related things) on stellar spectra. In some down time, Schölkopf also helped me with some non-trivial linear algebra!

2017-08-15

a mistake in an E-M algorithm

[I am on quasi-vacation this week, so only posting irregularly.]

I (finally—or really for the N-th time, because I keep forgetting) understood the basis of E-M algorithms for optimizing (what I call) marginalized likelihoods in latent-variable models. I then worked out the equations for the E-M step for factor analysis, and a generalization of factor analysis that I hope to use in my project with Christina Eilers (MPIA).
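
For orientation, here are the parts everyone agrees on, in my notation: the factor-analysis model and the E-step moments. (The M-step is exactly where the trouble described below lives, so I won't transcribe a version of it here without more care.)

```latex
x_n = \Lambda z_n + \epsilon_n, \quad
z_n \sim \mathcal{N}(0, I), \quad
\epsilon_n \sim \mathcal{N}(0, \Psi)\ \ (\Psi\ \text{diagonal})

\beta \equiv \Lambda^\top (\Lambda \Lambda^\top + \Psi)^{-1}, \quad
\mathbb{E}[z_n \mid x_n] = \beta x_n, \quad
\mathbb{E}[z_n z_n^\top \mid x_n] = I - \beta \Lambda + \beta x_n x_n^\top \beta^\top
```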

Imagine my concern when I got a different update step than I find in the writings of my friend and mentor Sam Roweis (deceased), who is the source of all knowledge, as far as I am concerned! I spent a lot of time looking up stuff on the web, and most things agree with Roweis. But finally I found this note by Andrew Ng (Stanford / Coursera), which agrees with me (and disagrees with Roweis).

If you care about the weeds, the conflict is between equation (8) in those Ng notes and page 3 of these Roweis notes. It is a subtle difference, and it takes some work to translate notation. I wonder if the many documents that match Roweis derive from (possibly unconscious) propagation from Roweis, or whether the flow is in another direction, or whether it is just that the mistake is an easy one to make? Oddly, Ng decorates his equation (8) with a warning about an error you can easily make, but it isn't the error that Roweis made.

So much of importance in computer science and machine learning is buried in lecture notes and poorly indexed documents in user home pages. This is not a good state of affairs!

2017-08-11

serious bugs; dimensionality reduction

Megan Bedell (Chicago) and I had a scare today: Although we can show that in very realistically simulated fake data (with unmodeled tellurics, wrong continuum, and so on) a synthetic spectrum (data-driven) beats a binary mask for measuring radial velocities, we found that in real data from the HARPS instrument that the mask was doing better. Why? We went through a period of doubting everything we know. I was on the point of resigning. And then we realized it was a bug in the code! Whew.

Adrian Price-Whelan (Princeton) also found a serious bug in our binary-star fitting. The thing we were calculating as the pericenter distance is the distance of the primary-star center-of-mass to the system barycenter. That's not the minimum separation of the two stars! Duh. That had us rolling on the floor laughing, as the kids say, especially since we might have gotten all the way to submission without noticing that absolutely critical bug.

At the end of the day, I gave the Königstuhl Colloquium, on the blackboard, about dimensionality reduction. I started with a long discussion about what is good and bad about machine learning, and then went (too fast!) through PCA, ICA, kernel-PCA, PPCA, factor analyzers, HMF, E-M algorithms, latent-variable models, and the GPLVM, drawing connections between them. The idea was to give the audience context and jumping-off points for their projects.

2017-08-10

micro-tellurics

Today, in an attempt to make our simulated extreme-precision radial-velocity fake data as conservative as possible, Megan Bedell (Chicago) and I built a ridiculously pessimistic model for un-modeled (and unknown) telluric lines that could be hiding in the spectra, at amplitudes too low to be clearly seen in any individual spectrum, but with the full wavelength range bristling with lines. Sure enough, these “micro-tellurics” (as you might call them) do indeed mess up radial-velocity measurements. The nice thing (from our perspective) is that they mess up the measurements in a way that is co-variant with barycentric velocity, and they mess up synthetic-spectrum-based RV measurements less than binary-mask-based RV measurements.
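
A sketch of the kind of pessimistic micro-telluric model I mean (every number below is invented; the point is a forest of lines, each individually invisible):

```python
import numpy as np

rng = np.random.default_rng(11)
wave = np.linspace(5000., 6000., 20000)        # wavelength grid in Angstroms
centers = rng.uniform(wave[0], wave[-1], 5000) # 5000 hypothetical weak lines
depths = rng.uniform(0., 1e-3, 5000)           # <= 0.1 percent deep each
width = 0.05                                   # Gaussian sigma in Angstroms

telluric = np.ones_like(wave)
for c, d in zip(centers, depths):
    telluric *= 1. - d * np.exp(-0.5 * ((wave - c) / width) ** 2)
# multiply any noiseless stellar model by `telluric` before adding noise
```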

At MPIA Galaxy Coffee, Irina Smirnova-Pinchukova (MPIA) gave a great talk about her trip on a SOFIA flight.

2017-08-09

machine learning, twins, excitation temperature

After our usual start at the Coffee Nerd, it was MW Group Meeting, where we discussed (separately) Cepheids and Solar-neighborhood nucleosynthesis. On the latter, Oliver Philcox (St Andrews) has taken the one-zone models of Jan Rybizki (MPIA) and made them 500 times faster using a neural-network emulator. This emulator is tuned to interpolate a set of (slowly computed) models very quickly and accurately. That's a good use of machine learning! Also, because of backpropagation, it is possible to take derivatives of the emulator with respect to its inputs (I think), which gives you gradients for optimization and sampling.
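
The derivative point, in a hand-rolled miniature: a random one-hidden-layer "emulator" (the weights are stand-ins, not a trained model) and its input gradient by the chain rule—which is exactly what backpropagation automates—checked against central differences:

```python
import numpy as np

rng = np.random.default_rng(5)
W1, b1 = rng.standard_normal((32, 4)), rng.standard_normal(32)  # 4 inputs
W2, b2 = rng.standard_normal((1, 32)), rng.standard_normal(1)   # 1 output

def emulator(x):
    return W2 @ np.tanh(W1 @ x + b1) + b2

def emulator_grad(x):
    h = np.tanh(W1 @ x + b1)
    return (W2 * (1. - h ** 2)) @ W1        # d(output)/d(inputs), shape (1, 4)

x, eps = rng.standard_normal(4), 1e-5
fd = np.array([(emulator(x + eps * np.eye(4)[i])
                - emulator(x - eps * np.eye(4)[i])) / (2 * eps)
               for i in range(4)]).ravel()
print(np.allclose(emulator_grad(x).ravel(), fd, atol=1e-6))  # True
```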

The afternoon's PSF Coffee meeting had presentations by Meg Bedell (Chicago) about Solar Twin abundances, and by Richard Teague (Michigan) about the protoplanetary disk TW Hya. On the former, Bedell showed that she can make extremely precise measurements, because a lot of theoretical uncertainties cancel out. She finds rock-abundance anomalies (that is, abundance anomalies that are stronger in high-condensation-temperature lines) all over the place, which is context for results from Semyeong Oh (Princeton). On TW Hya, Teague showed that it is possible to get pretty consistent temperature information out of line ratios. I would like to see two-dimensional maps of those: Are there embedded temperature anomalies in the disk?

2017-08-08

latent-variable model; bound-saturating EPRV

Today, Christina Eilers (MPIA) and I switched her project over to a latent variable model. In this model, stellar spectra (every pixel of every spectrum) and stellar labels (Teff, logg, and so on for every star) are treated on an equal footing as “data”. Then we fit an underlying low-dimensional model to all these data (spectra and labels together). By the end of the day, cross-validation tests were pushing us to higher and higher dimensionality for our latent space, and the quality of our predictions was improving. This seems to work, and is a fully probabilistic generalization of The Cannon. Extremely optimistic about this!

Also today, Megan Bedell (Chicago) built a realistic-data simulation for our EPRV project, and also got pipelines working that measure radial velocities precisely. We have realistic, achievable methods that saturate the Cramér–Rao bound! This is what we planned to do this week, not today. However, we have a serious puzzle: We can show that a data-driven synthetic spectral template saturates the bound for radial-velocity measurement, and that a binary mask template does not. But we find that the binary mask is so bad, we can't understand how the HARPS pipeline is doing such a great job. My hypothesis: We are wrong that HARPS is using a binary mask.

2017-08-07

linear models for stars

My loyal reader knows that my projects with Christina Eilers (MPIA) failed during the #GaiaSprint, and we promised to re-group. Today we decided to take one last attempt, using either heteroskedastic matrix factorization (or other factor-analysis-like method) or else probabilistic principal components analysis (or a generalization that would be heteroskedastic). The problem with these models is that they are linear in the data space. The benefit is that they are simple, fast, and interpretable. We start tomorrow.

I made a plausible paper plan with Megan Bedell (Chicago) for our extreme-precision radial-velocity project, in which we assess the information loss from various methods for treating the data. We want to make very realistic experiments and give very pragmatic advice.

I also watched as Adrian Price-Whelan (Princeton) used The Joker to find some very strange binary-star systems with red-clump-star primaries: Since a RC star has gone up the giant branch and come back down, it really can't have a companion with a small periastron distance! And yet...

2017-08-06

enfastenating

Various hacking sessions happened in undisclosed locations in Heidelberg this weekend. The most productive moment was that in which—in debugging a think-o about how we combine independent samplings in The Joker—Adrian Price-Whelan (Princeton) and I found a very efficient way to make our samplings adapt to the information in the data (likelihood). That is, we used a predictive adaptation to iteratively expand the number of prior samples we use to an appropriate size for our desired posterior output. (Reminder: The Joker is a rejection sampler.) This ended up speeding up our big parallel set of samplings by a factor of 8-ish!
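
The idea, schematically (this is a sketch of the adaptation, with hypothetical function names, not The Joker's actual implementation):

```python
import numpy as np

def adaptive_rejection_sample(ln_like, draw_prior, n_target=256, n0=1024, seed=0):
    """Rejection-sample a posterior, doubling the prior-sample pool until
    at least n_target samples survive the rejection step."""
    rng = np.random.default_rng(seed)
    n = n0
    while True:
        samples = draw_prior(n, rng)               # (n, ndim) prior draws
        ln_w = ln_like(samples)                    # (n,) log-likelihoods
        keep = np.log(rng.uniform(size=n)) < ln_w - ln_w.max()
        if keep.sum() >= n_target:
            return samples[keep]
        n *= 2                                     # expand the pool and retry

# e.g., a sharp 1-D Gaussian likelihood under a broad uniform prior:
posterior = adaptive_rejection_sample(
    lambda s: -0.5 * ((s[:, 0] - 3.) / 0.1) ** 2,
    lambda n, rng: rng.uniform(0., 10., size=(n, 1)))
```

In real life the expanded pool should reuse (not redraw) the earlier samples, and the expansion should be predictive rather than blind doubling; The Joker's bookkeeping is more careful than this toy.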

2017-08-04

M-dwarf spectral types; reionization

Jessica Birky (UCSD) and I met with Derek Homeier (Heidelberg) and Matthias Samland (MPIA) to update them on the status of the various things Birky has been doing, and discuss next steps. One consequence of this meeting is that we were able to figure out a few well-defined goals for Birky's project by the end of the summer:

Because of a combination of too-small training set and optimization issues in The Cannon, we don't have a great model for M-dwarf stars (yet) as a function of temperature, gravity, and metallicity. That's too bad! But on the other hand, we do seem to have a good (one-dimensional) model of M-dwarf stellar spectra as a function of spectral type. So my proposal is the following: We use the type model to paint types onto all M-dwarf stars in the APOGEE data set, which will probably correlate very well with temperature in a range of metallicities, and then use those results to create recommendations about what spectral modeling would lead to a good model in the more physical parameters.

Late in the day, José Oñorbe (MPIA) gave a great talk about the empirical study of reionization. He began with a long and much-needed review of all the ways you can measure reionization, using radio imaging, the Lyman-alpha forest, damping wings, cosmic microwave background polarization, and so on. This brought together a lot of threads I have been hearing about over the last few years. He then showed his own work on the Lyman-alpha forest, where they exploit the thermodynamic memory the low-density gas has about its thermal history. They get good results even with fairly toy models, which is very promising. All indicators, by the way, suggest a very late reionization (redshifts 7 to 9 for the mid-point of the process). That's good for observability.

2017-08-03

planning; marginalization

I had phone calls with Megan Bedell (Chicago) and Lauren Anderson (Flatiron) to discuss near-term research plans. Anderson and I discussed whether the precise MW mapping we have been doing could be used to measure the length, strength, and amplitude of the Milky Way bar. It looks promising, although (by bad luck) the 2MASS sensitivity to red-clump stars falls off right around the Galactic Center (even above the plane and out of the dust). There are much better surveys for looking at the Galactic-center region.

Bedell and I contrasted our plans to build a data-driven extreme-precision radial-velocity (EPRV) pipeline with our plans to write something more information-theoretic and pragmatic about how to maximize RV precision. Because our data-driven pipeline requires some strong applied math, we might postpone that to the Fall, when we are co-spatial with math expertise in New York City.

I was pleased by a visit from Joe Hennawi (UCSB) and Fred Davies (MPIA / UCSB) in which they informed me that some comments I made about sampling approximations to marginalizations changed their strategy in analyzing very high redshift quasars (think z>7) for IGM damping wing (and hence reionization). We discussed details of how you can use a set of prior-drawn simulations to do a nuisance-parameter marginalization (in this case, over the phases of the simulation).
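
The trick itself fits in a few lines. A minimal sketch, assuming a log-likelihood that conditions on one prior-drawn simulation (every name here is hypothetical):

```python
# Monte Carlo marginalization over a nuisance (the simulation phases):
# ln p(data | theta) ~= ln [ (1/S) sum_s p(data | theta, sim_s) ],
# with the sims drawn from the prior over the nuisance parameters.
import numpy as np
from scipy.special import logsumexp

def ln_marginal_like(ln_like, data, theta, prior_sims):
    lnl = np.array([ln_like(data, theta, sim) for sim in prior_sims])
    return logsumexp(lnl) - np.log(len(prior_sims))
```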

2017-08-02

graphical models; bugs

At MPIA MW group meeting, Semyeong Oh (Princeton) described her projects to find—and follow up—co-moving stellar pairs and groups in the Gaia TGAS data. She presented the hypothesis test (or model comparison) by showing the two graphical models, which was an extremely informative and compact way to describe the problem. This led to a post-meeting discussion of graphical models and how to learn about them. There is no really good resource for astronomers. We should write one!

I spent the afternoon with Matthias Samland (MPIA) and Jessica Birky (UCSD), debugging code! Samland is adding a new kind of systematics model to his VLT-SPHERE data analysis. Birky is hitting the limitations of some of our code that implements The Cannon. I got a bit discouraged about the latter: The Cannon is a set of ideas, not a software package! That's good, but it means that I don't have a perfectly reliable and extensible software package.

2017-08-01

Simpson's paradox

I spent part of the day working through Moe & Di Stefano (2017), which is an immense and comprehensive paper on binary-star populations. The reason for my attention: Adrian Price-Whelan (Princeton) and I need a parameterization for the binary-star population work we are doing in APOGEE. We are not going to make the same choices as those made by Moe, but there is tons of relevant content in that paper. What a tour de force!

I spent part of the afternoon crashing the RAVE Collaboration meeting at the University of Heidelberg. I learned many things, though my main purpose was to catch up with Matijevic, Minchev, Steinmetz, and Freeman! Ivan Minchev (AIP), in his talk, discussed relationships between age, metallicity, and Galactocentric radius for stars in the disk. He has a beautiful example of Simpson's paradox, in which, for thin population slices (age slices), the lower-metallicity stars have higher tangential velocities, but overall the opposite is true, so that measured gradients depend very strongly on the measurement uncertainties (because: Can you slice the populations finely enough in age?). We discussed paths to resolving this with a proper generative model of the data.
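
The paradox is easy to reproduce with fake data. In this toy (all numbers invented, not Minchev's), every narrow age slice shows the metal-poor stars moving faster, while the pooled trend has the opposite sign:

```python
# Simpson's paradox with a fake age--metallicity--velocity population.
import numpy as np

rng = np.random.default_rng(8)
age = rng.uniform(1.0, 10.0, 20000)          # Gyr
scatter = 0.1 * rng.normal(size=age.size)
feh = -0.08 * age + scatter                  # [Fe/H] falls with age
vel = 200.0 - 10.0 * age - 40.0 * scatter    # km/s; metal-poor move faster

print("pooled slope:", np.polyfit(feh, vel, 1)[0])    # comes out positive
thin = (5.0 < age) & (age < 5.5)                      # one thin age slice
print("in-slice slope:", np.polyfit(feh[thin], vel[thin], 1)[0])  # negative
```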

2017-07-31

nuisance model for imaging

The CPM of Wang et al and the transit-search methods of Foreman-Mackey et al were developed by us to account for, remove, or obviate systematic issues in the Kepler imaging. Last summer, Matthias Samland (MPIA) pointed out that these could be used in direct imaging of exoplanets, which is another place where highly informative things happen in the nuisance space. Today we worked through the math and code that would make a systematics-marginalized search for direct detections of planets in the VLT-SPHERE imaging data. It involves finding a basis of time variations of pixels in the device (pixels, not patches, which is odd and at odds with standard practice), choosing a sensible prior on these, fitting every pixel in the relevant part of the device as a sum of these variations plus an exoplanet signal, but marginalizing out the former.
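
For a single pixel the marginalization is analytic: a Gaussian prior on the basis amplitudes just folds into the noise covariance. A sketch, in which the basis, signal model, and prior variance are all placeholder assumptions:

```python
# Marginalized likelihood for one pixel's time series y(t), modeled as
# (planet amplitude a) * signal s(t) plus a linear combination of basis
# time-variations B with Gaussian-prior amplitudes (prior variance lam).
import numpy as np

def ln_like(y, B, s, a, sigma2=1.0, lam=1.0):
    """y: (T,) fluxes; B: (T, K) basis; s: (T,) planet signal; a: amplitude."""
    T = y.size
    C = sigma2 * np.eye(T) + lam * (B @ B.T)   # basis marginalized into C
    r = y - a * s
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (r @ np.linalg.solve(C, r) + logdet + T * np.log(2 * np.pi))
```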

2017-07-30

regularize all the things

On the weekend, Bernhard Schölkopf (Tübingen) showed up in Heidelberg to hang out and talk shop. What an honor and pleasure! We spent time in the garden discussing various things, but he was particularly insightful about the projects we have been doing with Christina Eilers (MPIA) on extending The Cannon to situations where stellar labels (even in the training set) are either noisy or missing. As we described the training and test steps, we drew graphical models and then looked at the inconsistencies of those graphical models—or not really inconsistencies, but limitations. We realized that we couldn't keep the model interpretable (which is a core idea underlying The Cannon) without putting stronger priors on both the label space (the properties of stars) and the coefficient space (the control parameters of the spectral expectation). If we put on these priors, the model ought to get regularized into a sensible place. I think I know how to do this!

He also pointed out that a probabilistic version of The Cannon would look a lot like the GPLVM (Gaussian Process latent-variable model). That means that there might be out-of-the-box code that could conceivably help us. I am slightly suspicious, because my idea of the priors or regularization in the label domain is so specific, astrophysical, and informative. But it is worth thinking about this.

2017-07-28

destroyer of worlds

One of my main research accomplishments today was to work up a project proposal for Yuan-Sen Ting (ANU) and others about finding stars whose spectra suggest that they have (recently) swallowed a lot of rocky material. This was inspired by a few things: The first is that Andy Casey (Monash) can find Li-rich stars in LAMOST just by looking at the residuals away from a fit by The Cannon at the location of Li lines. The second is that Semyeong Oh (Princeton) and various collaborators have found Sun-like stars that look like they have swallowed many Earth masses of rock in their recent pasts, by doing (or having John Brewer of Yale do) detailed chemical abundance work on the spectra. The third is that Yuan-Sen Ting has derivatives of spectral expectations with respect to all elements for LAMOST-like spectra.

At the end of the day, Hans-Walter Rix (MPIA) gave a colloquium on the After-Sloan-IV project, which my loyal reader knows a lot about. I learned things in his talk, however: One is that SDSS-III BOSS has found several broad (ish) lined quasars that shut off between SDSS-I and SDSS-III. One relevant paper is here. Another is that he (with Jonathan Bird of Vandy) has made some beautiful visualizations of the point of doing dense sampling of the giant stars in the Milky Way disk.

2017-07-27

cosmological foregrounds; Cannon extensions

At MPIA Galaxy Coffee, Daniel Lenz (JPL) spoke about foregrounds and component separation in CMB and LSS experiments. He emphasized (and I agree completely) that the dominant problem for next-generation ("Stage-4" in the unpleasant terminology of cosmologists) cosmology experiments—be they CMB, LSS, or line intensity mapping—is component separation or foreground inference. He showed some nice results using generalized linear models of optical data for Milky-Way dust inferences. Afterwards I pitched him my ideas about latent variable models (all vaporware right now).

Late in the day, Christina Eilers (MPIA) and I met to discuss why our project to fit for both labels and spectral model in a new version of The Cannon didn't work. I have various theories, most of which relate to some unholy mix of the curse of dimensionality (such that optimization of a model is a bad idea) and model wrongness (such that the model is trying to use the freedom it has inappropriately). But I am seriously confused. We worked through all the possible directions and realized that we need to re-group with our full team to decide what to do next. I assigned myself two things: The first is to look at marginalization of The Cannon internals (that is, what marginalizations might be analytic?). The second is to look at the machine-learning literature on the difference between optimizing a model for prediction accuracy as opposed to optimizing it for model accuracy (or likelihood).

2017-07-26

fitting a line, now with fewer errors

[I was on vacation for a few days.]

I spent a tiny bit of time on my vacation working on fixing the wrong parts of section 7 of my paper with Bovy and Lang on fitting a line to data. I am close to a new wording I am happy with, and with corrected equations. I then realized that there are a mess of old issues to look at; I might do that too before I re-post it to arXiv.

2017-07-21

#GaiaSprint, day 5

Today was the last day of the 2017 Heidelberg Gaia Sprint. Every participant prepared a single slide in a shared slide deck (available here), and had 120 seconds to present their results. Look at the slides for the full story, but it was really impressive! A few highlights for me were:

Rix and Fouesneau used common proper motions to match Gaia DR1 TGAS stars to much-fainter PanSTARRS companions, and found hundreds of white-dwarf binaries, with a clear, complete white-dwarf sequence. Hawkins was able to separate red-clump stars from other RGB stars with a data-driven spectral classifier, and to interpret it. Ting found similar results, working just with the spectral labels fit to spectra with physical models. El-Badry showed that the stars he identifies as binaries spectroscopically (he can find them even when the velocity differences vanish) lie above the main sequence in the color–magnitude diagram.

Beaton showed that an old statistical-parallax calibration of RR Lyrae stars by Kollmeier turns out to be strongly confirmed in the TGAS data. Birky built a beautiful one-dimensional model of M-dwarf spectra in APOGEE using only a single label, which is literature spectral classifications. Burggraaff has a possible vertical-direction moving group coming through our local position in the Milky Way disk. Coronado found that she can calibrate main-sequence star luminosities using spectral labels to almost the quality of other standard candles. Rybizki made progress towards an empirical set of supernova yields, starting with APOGEE abundances and (poor) stellar ages.

And, as I have mentioned before, Casey showed that we might be able to do asteroseismology with Gaia, and Anderson made incredible maps of the Milky-Way disk (and animations of slices thereof!).

2017-07-20

#GaiaSprint, day 4

Today Lauren Anderson (Flatiron) and Adrian Price-Whelan (Princeton) made beautiful visualizations of Anderson's 20-million star catalog with distances, built by training a model on the TGAS Catalog and applying it to plausibly-red-clump stars in the billion-star catalog from Gaia. I give an example below, which shows two thin slices of the Milky Way, one through the Sun, and one through the Galactic Center (but blotted out by local dust).

Andy Casey (Monash) got our asteroseismology project working with real data! He sub-sampled some Kepler light curves down to something like Gaia end-of-mission cadence, and then applied the Stephen Feeney (Flatiron) likelihood function. Again, it has peaks at reasonable asteroseismic parameters, near the KASC published values. We are slowly developing some intuitions about what parameters are well constrained and where.

After four days of hacking on The Cannon but with probabilistic (noisy and missing) labels, Christina Eilers (MPIA) and I gave up: We worked out the bugs, got the optimizer working, and realized that our issues are fundamentally conceptual: When you have a bad model for your data (that is, a model that is ruled out strongly by the data), there can be conflicts between model accuracy and prediction accuracy. We have hit one of those conflicts. We need to re-group on this one.


2017-07-19

#GaiaSprint, day 3

Today we had amazing success with an incredibly simple (read: dumb-ass) project for making precise maps of the Milky Way: Lauren Anderson (Flatiron) and I built a data-driven model of dust extinction, using the red-clump stars in the TGAS sample that we deconvolved last month. We then applied this dust inference to every single star in the full billion-star catalog (requiring 2MASS photometry), and selected stars whose dust-corrected color is consistent with being an RC star. That is, we assumed that every star with the correct de-reddened color is an RC star. RC stars are standard candles, so then we could map the entire MW disk. The maps are precise, but contaminated. So much structure is visible. Adrian Price-Whelan (Princeton) says we are seeing a flaring disk!
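
The whole pipeline is almost embarrassingly short. A cartoon version (the RC absolute magnitude, color window, and extinction coefficient below are stand-in numbers, not the values we actually used):

```python
# Select red-clump candidates by de-reddened color and convert apparent
# magnitude to distance using the RC as a standard candle.
import numpy as np

M_RC = -1.6                 # assumed RC absolute magnitude in 2MASS Ks
RC_COLOR, TOL = 0.62, 0.05  # assumed de-reddened (J - Ks) color and window

def rc_distances(J, Ks, E_JKs):
    """J, Ks: apparent mags; E_JKs: inferred (J - Ks) reddening per star."""
    color0 = (J - Ks) - E_JKs
    is_rc = np.abs(color0 - RC_COLOR) < TOL      # "every such star is RC"
    A_Ks = 0.5 * E_JKs                           # assumed extinction law
    mu = Ks[is_rc] - A_Ks[is_rc] - M_RC          # distance modulus
    return 10.0 ** (0.2 * mu + 1.0), is_rc       # distances in pc
```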

2017-07-18

#GaiaSprint, day 2

Gaia Sprint continued today with Christina Eilers (MPIA) and me puzzling over the behavior of her code, which extends The Cannon to the case in which there are label uncertainties on the training-set stars. The behavior of the code is odd: As we give the code less freedom, the model of the stellar spectra gets better but the prediction gets worse. Makes no sense! The optimization is huge, and it relies on hand-typed analytic derivatives (I know, I know!), so we don't know whether we have conceptual issues or bugs.

Meanwhile, Andy Casey (Monash) and Ana Bonaca (Harvard) got excited about doing asteroseismology with the sparse photometric light curves that will be produced by Gaia. In particular, Casey got Stephen Feeney's (Flatiron) fake-data generator and likelihood function code (made for TESS-like data) working for Gaia-like data. He finds peaks in the likelihood function! Which means that maybe we can do asteroseismology without taking a Fourier Transform. His results, however, challenged both of our intuitions about the information about nu-max and delta-nu that ought to reside in any data stream. Inspired by all this, Bonaca and Donatas Narbutis (Lithuania) looked up large HST programs on stellar clusters and showed that it is plausible that we could do asteroseismology in HST too!
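
To record the idea: one way to write down an asteroseismology likelihood with no Fourier transform is as a Gaussian process whose kernel is a comb of damped oscillations around nu-max spaced by delta-nu, evaluated directly on the (possibly very sparse and irregular) time stamps. This sketch is my own cartoon, not Feeney's actual code:

```python
# GP likelihood for a light curve whose kernel encodes a comb of modes.
import numpy as np

def mode_kernel(dt, nu_max, delta_nu, n_modes=7, amp=1.0, tau=20.0):
    """dt: matrix of time lags; frequencies in 1/day, damping time in days."""
    nus = nu_max + delta_nu * (np.arange(n_modes) - n_modes // 2)
    return sum(amp * np.exp(-np.abs(dt) / tau) * np.cos(2 * np.pi * nu * dt)
               for nu in nus)

def ln_like(t, y, yerr, nu_max, delta_nu):
    dt = t[:, None] - t[None, :]
    C = mode_kernel(dt, nu_max, delta_nu) + np.diag(yerr ** 2)
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (y @ np.linalg.solve(C, y) + logdet
                   + t.size * np.log(2 * np.pi))
```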

In other news, Mariangela Lisanti (Princeton) worked through recent results on dynamical friction in an ultralight-scalar dark-matter model (where the dark matter has a de Broglie wavelength that is kpc in scale!) and has plausible evidence that the timing argument (for the masses of Local Group objects) might rule out or constrain ULS dark matter. And Anthony Brown (Leiden) and Olivier Burggraaff (Leiden) showed me an update of the (2009) extreme-deconvolution model of the local MW disk velocity field that Jo Bovy and I made, and they find some structure in the vertical direction, which is cool and intriguing.

2017-07-17

#GaiaSprint, day 1

Today was the first day of the 2017 Heidelberg Gaia Sprint. Even on day one, it was an impressive day of accomplishments. The day started with a pitch session in which each of the 47 participants was given one slide and 120 seconds to say who they are and what they want to do or learn at the Sprint. These pitch slides are here.

After the pitch, my projects launched well: Jessica Birky (UCSD) was able to get the new version of The Cannon created by Christina Eilers (MPIA) working and started to get some seemingly valuable spectral models out of the M-dwarf spectra in APOGEE. Lauren Anderson (Flatiron) set up and trained a data-driven (empirical) model for the extinction of red stars, based on the Gaia and 2MASS photometry.

Perhaps the most impressive accomplishment of the day is that Morgan Fouesneau (MPIA) and Hans-Walter Rix (MPIA) matched stars between Gaia TGAS and the new GPS1 catalog that puts proper motions onto all PanSTARRS stars. They find co-moving stars where the brighter is in TGAS and the fainter is in GPS1. These pairs are extremely numerous. Many are main-sequence pairs but many pair a main-sequence star in TGAS with a white dwarf in GPS1. These pairs identify white dwarfs but also potentially put cooling ages onto both stars in the pair. The white-dwarf sequence they find is beautiful. Exciting!

2017-07-13

M-dwarf expertise

Jessica Birky (UCSD) and I met with Wolfgang Brandner (MPIA) and Derek Homeier (MPIA) to discuss M-dwarf spectra. Homeier has just finished a study of a few dozen M dwarfs in APOGEE with the PHOENIX models. We are going to find out whether this set of stars will constitute an adequate training set for The Cannon. It is heavily weighted to a small temperature range, so it might not have enough coverage for us. The conversation was very wide-ranging, and we learned a huge amount: whether rotation might affect us (or be detectable), whether binaries might be common in our sample, and whether we might be able to use photometry (or photometry plus astrometry) to get effective temperatures.

2017-07-12

Bayes Cannon, asteroseismology, binaries

Today, at MPIA Milky Way Group Meeting, I presented my thinking about Stephen Feeney (Flatiron), Ana Bonaca (Harvard), and my project on doing asteroseismology without the Fourier Transform. I am so excited about the (remote, perhaps) possibility that Gaia might be able to measure delta-nu and nu-max for many stars! Possible #GaiaSprint project?

Before me, Kareem El-Badry (Berkeley) talked about how wrong your inferences about stars can be when you model the spectrum without considering binarity. This maps onto a lot of things I discuss with Tim Morton (Princeton) in the area of exoplanet science. Also, Yuan-Sen Ting (ANU) spoke about using t-SNE to look for clustering of stars in chemical space.

I spent the early morning writing up a safe-for-methodologists (think: statisticians, mathematicians, and computer scientists) description of The Cannon's likelihood function, when the stellar labels themselves are poorly known (really the project of Christina Eilers here at MPIA). I did this because Jonathan Weare (Chicago) has proposed that he can probably sample the full posterior. I hope that is true! It would be a probabilistic tour de force.

2017-07-11

not ready for #GaiaSprint

Lauren Anderson (Flatiron) showed up at MPIA today to discuss #GaiaSprint projects and our next projects more generally. We discussed a possible project in which we try to use the TGAS data to infer the relationships between extinction and intrinsic color for red-giant stars, and then use those relationships in the billion-star catalog to predict parallaxes for DR2 (and also learn the dust map and the spatial distribution of stars in the Milky Way).

2017-07-10

asteroseismology; toy model potentials; dwarfs vs giants

Stephen Feeney (Flatiron) sent me plots today that suggest that we can measure asteroseismic nu-max and delta-nu for a red-giant star without ever taking the Fourier Transform of the data. Right now, there are still many issues: This is still fake data, which is always cheating. The sampler (despite being nested and all) gets stuck in poor modes (and this problem is exceedingly multimodal). But when we inspect the sampling after the fact, the good answer beats the bad answers in likelihood by a huge ratio, which suggests that we might be able to do asteroseismology at pretty low signal-to-noise too. We need to move to real data (from Kepler).

Because of concern that (in our stellar-stream project) we aren't marginalizing out all our unknowns yet—and maybe that is making things look more informative than they are—Ana Bonaca (Harvard) started today on including the progenitor position in our Fisher-matrix (Cramér-Rao) analysis of all stellar streams. We also have concerns about the rigidity of the gravitational-potential model (which is a toy model, in keeping with the traditions of the field!). We discussed also marginalizing out some kind of perturbation expansion around that toy model. This would permit us both to be more conservative, and also to criticize the precisions obtained with these toy models.

Jessica Birky (UCSD) looked at chi-square differences (in spectral space) between APOGEE spectra of low-temperature stars without good labels and two known M-type stars, one giant and one dwarf. This separated all the cool stars in APOGEE easily into two classes. Nice! We are sanity-checking the answers. We are still far, however, from having a good training set to fire into The Cannon.
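
The separation really is this simple. A sketch (the spectra, inverse variances, and the two templates are assumed inputs):

```python
# Classify cool APOGEE stars as dwarf or giant by chi-square against one
# known M-dwarf template and one known M-giant template.
import numpy as np

def dwarf_or_giant(flux, ivar, template_dwarf, template_giant):
    chi2_d = ((flux - template_dwarf) ** 2 * ivar).sum(axis=1)
    chi2_g = ((flux - template_giant) ** 2 * ivar).sum(axis=1)
    return np.where(chi2_d < chi2_g, "dwarf", "giant")
```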

2017-07-07

M dwarfs, The Cannon, binaries, streams, corrections, and coronagraphy

So many projects! I love my summers in Heidelberg. I spent time working through the figures that would support a paper on M-dwarf stars with The Cannon with Jessica Birky (UCSD) today. She has run The Cannon on a tiny training set of M-dwarf stars in the APOGEE data, and it seems to work (despite the size and quality of our training set). We are diagnosing whether it all makes sense now.

With Christina Eilers (MPIA), Hans-Walter Rix (MPIA) and I discussed the amazing fact that she can optimize (a more sophisticated version of) The Cannon over all internal parameters and all stellar labels in a single shot; this is a hundred-thousand-parameter non-linear least-squares fit! It seems to be working, but there are oddities to follow up. She is dealing with the point that many stars have bad, missing, or noisy labels.

With Kareem El-Badry (Berkeley), Rix and I worked through the math of going from an SB2 catalog (that is, a catalog of stars known to be binary because their spectra are better fit by superpositions of pairs of stars than by single stars) through to a population inference about the binary population. This project meshes well with the plans that Adrian Price-Whelan (Princeton) and I have for the summer.

With Ana Bonaca (Harvard), I discussed further marginalizations in her project to determine the information content in stellar streams. She finds that the potential form and the progenitor phase-space information are very informative; that is, if we relax those to give more freedom, we expect to find that the streams are less constraining of the Galactic potential. We discussed ways to test this in the next few days.

With Stephen Feeney (Flatiron) and Daniel Mortlock (Imperial) I discussed the possibility of writing a paper about the Lutz-Kelker correction (don't do it!) and posterior probabilistic catalogs (don't make them!) and what scope it might have. We tentatively decided to try to put something together.

With Matthias Samland (MPIA) and Jeroen Bouwman (MPIA) I discussed their ideas for moving the CPM (which we used to de-trend Kepler and K2 light curves) to the domain of direct detection of exoplanets with coronagraphs. This is a great idea! We discussed how to choose predictor pixels, and the form that the likelihood takes when you marginalize out the superposition of predictor pixels. This is a very promising software direction for future coronagraph missions. But we noticed that many projects and observing runs might be data-limited: People take hundreds of photon-limited exposures instead of thousands of read-noise-limited exposures. I think that's a mistake: No current results are, in the end, photon-noise limited! We put Samland onto looking at the subspace in which the pixel variations live.

I love my job!

2017-07-06

nothing

I had a whole day on the train, back from Potsdam. That didn't translate into a whole day of research.

2017-07-05

Quillen

I spent the day at Potsdam, to participate (and give a talk) in the Wempe Award ceremony; the prize went to Alice Quillen (Rochester), who has done dynamical theory on a huge range of scales and in a huge range of contexts. I spoke about how data-driven models of stars might make it possible to precisely test Quillen's predictions. After my talk I had a long session with Ivan Minchev (AIP), Christina Chiappini (AIP), and Friedrich Anders (AIP) about work on stellar chemical abundances in the disk. They are trying to understand whether the alpha-rich disk itself splits into multiple populations or is just one. We discussed the possibility that any explanation of the alpha-to-Fe vs Fe-to-H plot ought to make predictions for other galaxies. Right now theoretical expectations are soft, both because star formation is not right in the cosmological models, and because nucleosynthetic yields are not right in the chemical evolution models. We also discussed Anders's use of t-SNE for dimensionality reduction and how we might test its properties (the properties of t-SNE, that is).

2017-07-04

computing stable derivatives

In my science time today, I worked with Ana Bonaca (Harvard) on her computation of derivatives—of stellar stream properties with respect to potential parameters. This is all part of our information-theoretic project on stellar streams. We are taking the derivatives numerically, which is challenging to get right, and we have had many conversations about step sizes and how to choose them. We made (what I hope are) final choices today: They involve computing the derivative at different step sizes, comparing each of those derivatives to those computed at nearby step sizes, and finding the smallest step size at which converged or consistent derivatives are being computed. Adaptive and automatic! But a pain to get working right.

Numerical context: If you take derivatives with step sizes that are too small, you get killed by numerical noise. If you take derivatives with step sizes that are too large, the changes aren't purely linear in the stepped parameter. The Goldilocks step size is not trivial to find.
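
A sketch of the adaptive rule described above (names and tolerance are hypothetical): march up a ladder of step sizes from small to large, and return the smallest step whose central-difference derivative agrees with its larger neighbor:

```python
# Automatic step-size choice for a numerical (central-difference) derivative.
import numpy as np

def stable_derivative(f, x0, steps=np.geomspace(1e-8, 1e-1, 15), rtol=1e-3):
    derivs = [(f(x0 + h) - f(x0 - h)) / (2.0 * h) for h in steps]
    for i in range(len(steps) - 1):
        # smallest step that is consistent with the next-larger step
        if abs(derivs[i] - derivs[i + 1]) <= rtol * abs(derivs[i + 1]):
            return derivs[i], steps[i]
    raise RuntimeError("no converged step size found")
```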

2017-07-03

models of stellar spectroscopy

Today was my first day at MPIA. I worked with Hans-Walter Rix (MPIA) and Christina Eilers (MPIA) on her new version of The Cannon, which simultaneously optimizes the model and the labels, with label uncertainties. It is a risky business for a number of reasons, one of which is that maximum likelihood has all the problems we know, and another of which is that optimization is hard. She has taken all the relevant derivatives (analytically), but is stuck on initialization. We came up with some tricks for improving her initialization; this problem has enormous numbers of local optima!

We also spoke with Kareem El-Badry (Berkeley) about a project he is doing with Rix to find binary stars among the LAMOST spectra. Here the problem is that the binaries will not be resolved spectrally or spatially, so the test comes down to whether the one-d spectrum is better explained by two stars (at the same distance and metallicity) than by one. He is finding (not surprisingly) that because the spectral models are not quite accurate enough, a mixture of two stars is almost always better than a single-star fit. So he decided today to try implementing (his own, bespoke version of) The Cannon. Then the model will (at least) be accurate in the spectral domain, which is what he needs.

I got started on a new project with Jessica Birky (UCSD) who is here at MPIA to work with me on M-dwarf spectra in the APOGEE project. Our first job is to find a training set of M dwarfs that have APOGEE spectra but also known temperatures and metallicities. That isn't trivial.

2017-06-29

summer plans

My last research day before heading to MPIA for the summer was taken up with many non-research things! However, I did have brief discussions with Lauren Anderson (Flatiron) about what is next for our collaboration, now that paper 1 is out!

2017-06-28

spin tagging of stars?

At the Stars group meeting, John Brewer (Yale) and Matteo Cantiello (Flatiron) told us about the Kepler / K2 Science meeting, which happened last week. Brewer was particularly interested in the predictions that Ruth Murray-Clay made for chemical abundance differences between big and small planet hosts; it is too early to tell how well these match on to the results Brewer is finding for chemical differences between stars hosting different kinds of exoplanet architectures.

Other highlights included really cool supernova light curves with amazing detail, granulation or flicker estimates of delta-nu and nu-max, and a clear bimodality in planetary radii between super-Earths and mini-Neptunes. There was much discussion in group meeting of this latter result, both what it might mean, and what predictions it might generate.

Highlights for Cantiello included results on the inflation of short-period planets by heating from their host stars. And, intriguingly, a possible asteroseismic measurement of stellar inclinations. That is, you might be able to measure the projection of a star's spin angular-momentum vector onto the line of sight. If you could (and if some results about aligned spin vectors in star-forming regions hold up), this could lead to a new kind of tagging for stars that are co-eval!

2017-06-27

global ozone

In the morning, researchers from across the Flatiron Institute gathered for a discussion of statistical inference, which is a theme that cuts across the different departments. Justin Alsing (Flatiron) led the discussion, asking for advice on his project to model global ozone over the last few decades. He has data that spans latitude, altitude, and time, and the ozone levels can be affected by many things other than long-term degradation by pollutants. So he wants to build a non-linear, data-driven model of confounders but still come to strong conclusions about the long-term trends. There was discussion of many relevant methods, including large linear models (regularized strongly), independent components analysis, latent variable models, neural networks, and so on. It was a wide-ranging and valuable discussion. The CCB at Flatiron has some valuable mathematics expertise, which could be important to all the Flatiron departments.

2017-06-26

statistics is hard

OMG much of my research time today was spent trying to figure out everything that is wrong with Section 7 (uncertainties in both x and y) of the Hogg, Bovy, and Lang paper on fitting a line. Warning to users: Don't use Section 7 until we update! The problems appeared early (see the GitHub issues on this Chapter), but came to a head when Dan Foreman-Mackey (UW) wrote this blog post. Oddly I disagree with Foreman-Mackey's solution, and I don't have consensus with Jo Bovy (Toronto) yet. It has something to do with how we take the limit to very large variance in our prior. But I must update the paper asap!

2017-06-22

the variance on the covariance of the variance

I had a long set of conversations with Boris Leistedt (NYU) about various matters cosmological. The most exciting idea we discussed comes from thinking about good ideas that Andrew Pontzen (UCL) and I discussed a few weeks ago: If you can cancel some kinds of variance in estimators by performing matched simulations with opposite initial conditions, might there be other families of matched simulations that can be performed to minimize other kinds of estimator variances?

For example, Leistedt wants to make a set of simulations that are good for estimating the covariance of a power-spectrum estimator in a real experiment. How do we make a set of simulations that get this covariance (which is the variance of a power spectrum, which is itself a variance) with minimum variance on that covariance (of that variance)? Right now people just make tons of simulations, with random initial conditions. You simply must be able to do better than pure random here. If we can do this well, we might be able to zero out terms in the variance (of the variance of the variance) and dramatically reduce simulation compute time. Time to hit the books!
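
A toy example shows why matched pairs can win. Here the "simulation" is a stand-in function with a quadratic nonlinearity, and averaging opposite-sign initial conditions cancels the odd-order term that dominates the estimator variance:

```python
# Variance cancellation with matched, opposite-sign initial conditions.
import numpy as np

rng = np.random.default_rng(12)

def evolve_and_measure(ic):
    # mock "simulation": linear growth plus quadratic mode coupling
    return np.mean(ic + 0.05 * ic ** 2)

# same total compute: 400 random runs vs 200 matched pairs
random_runs = [evolve_and_measure(rng.normal(size=512)) for _ in range(400)]
paired_runs = []
for _ in range(200):
    ic = rng.normal(size=512)
    paired_runs.append(0.5 * (evolve_and_measure(ic) + evolve_and_measure(-ic)))

print(np.var(random_runs), np.var(paired_runs))  # pairs win by a big factor
```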

2017-06-21

fast bar

Stars group meeting ended up being all about the Milky Way Bar. Jo Bovy (Toronto), many years ago, made a prediction about the velocity distribution as a function of position if the velocity substructure seen locally (in the Solar Neighborhood) is produced (in part) by a bar at the Galactic Center. The very first plate of spectra from APOGEE-South happens to have been taken in a region that critically tests this model. And he finds evidence for the predicted velocity structure! He finds that the best-fit bar is a fast bar (whatever that means—something about the rotation period). This is a cool result, and also a great use of the brand-new APOGEE-S data.

Bovy was followed by Sarah Pearson (Columbia) who showed the effects of a bar on the Pal-5 stream and showed that some aspects of its morphology could be explained by a fast bar. We weren't able to fully check whether both Bovy and Pearson want the exact same bar, but there might be a consistent story emerging.

2017-06-20

MCMC

The research highlight of the day was Marla Geha (Yale) dropping in to Flatiron to chat about MCMC sampling. She is working through the tutorial that Foreman-Mackey (UW) and I are putting together, and she is doing the exercises. I'm impressed! She gave lots of valuable feedback on our first draft.

2017-06-19

learning

I spent time working through the last bits of a paper by Dun Wang (NYU) about image modeling for time-domain astrophysics. I asked him to send it to our co-authors.

The rest of the day was spent in discussions of Bayesian inference with the Flatiron Astronomical Data Group reading group. We are doing elementary exercises in data analysis, and yet we are not finding them easy to discuss and understand, especially some of the details and conceptual arguments. In other words: No matter how much experience you have with data analysis, there are always things to learn!

2017-06-16

cosmic rays, alien technology

I helped Justin Alsing (Flatiron) and Maggie Lieu (ESA) search for HST data relevant to their project of training a model to find cosmic rays and asteroids. They tentatively decided that the HST cosmic-ray identification methods they are already using might be good enough to rely upon, which drops their requirements down to asteroids. That's good! But it's hard to make a good training set.

Jia Liu (Columbia) swung by to discuss the possibility of finding things at exo-L1 or exo-L2 (or the other Lagrange points). Some of the Lagrange points are unstable, so anything we find there would be a clear sign of alien technology. We looked at the relevant literature; we may be fully scooped, but I think there are probably still things to do. One thing we discussed is the observability; it is somehow going to depend on the relative density of the planet and star!

2017-06-15

Bayesian basics; red clump

A research highlight today was the first meeting of our Bayesian Data Analysis, 3ed reading group. It lasted a lot longer than an hour! We ended up going off on a tangent about the Fully Marginalized Likelihood versus cross-validation and its Bayesian equivalents. We came up with some possible research projects there! The rest of the meeting was Bayesian basics. We decided on some problems we would do in Chapter 2. I hate to admit that the idea of having a problem set to do makes me nervous!

In the afternoon, Lauren Anderson (Flatiron) and I discussed our project to separate red-clump stars from red-giant-branch stars in the spectral domain. We have two approaches: The first is unsupervised: Can we see two spectral populations where the RC and RGB overlap? The second is supervised: Can we predict relevant asteroseismic parameters in a training set using the spectra?

2017-06-14

cryo-electron-microscopy biases

At the Stars group meeting, I proposed a new approach to asteroseismology that could work for TESS. My approach depends on the modes being (effectively) coherent, which is only true for short survey durations, where “short” can still mean years. Also, Mike Blanton (NYU) gave us an update on the APOGEE-S spectrograph, being commissioned now at LCO in Chile. Everything is nominal, which bodes very well for SDSS-IV and is great for AS-4. David Weinberg (OSU) showed up and told us about chemical-abundance constraints on a combination of yields and gas-recycling fractions.

In the afternoon I missed Cosmology group meeting, because of an intense discussion about marginalization (in the context of cryo-EM) with Leslie Greengard (Flatiron) and Marina Spivak (Flatiron). In the conversation, Charlie Epstein (Penn) came up with a very simple argument that is highly relevant. Imagine you have many observations of the function f(x), but for each one your x value has had noise applied. If you take as your estimate of the true f(x) the empirical mean of your observations, the bias you get will be (for small scatter in x) proportional to the variance in x times the second derivative of f. That's a useful and intuitive argument for why you have to marginalize.
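
In symbols, for a perturbation ε with zero mean and variance σ², the argument is just a second-order Taylor expansion:

```latex
% average of f over noisy inputs, to second order in the perturbation:
\mathrm{E}[f(x+\epsilon)]
  \approx \mathrm{E}\!\left[f(x) + \epsilon\,f'(x) + \tfrac{1}{2}\,\epsilon^{2}\,f''(x)\right]
  = f(x) + \tfrac{1}{2}\,\sigma^{2}\,f''(x)
```

That is, the bias of the naive estimator is (1/2) σ² f″(x); it vanishes only where f is locally linear, which is exactly why you have to marginalize.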

2017-06-13

Renaissance

I spent the day at Renaissance Technologies, where I gave an academic seminar. Renaissance is the hedge fund that created the wealth behind the Simons Foundation, among many other foundations. I have many old friends there, including many PhD astrophysicists, two of whom (Kundić and Metzger) I overlapped with back when I was a graduate student at Caltech. I learned a huge amount while I was there, about how they handle data, how they decide what data to keep and why, how they manage and update strategies, and what kinds of markets they work in. Just like in astrophysics, the most interesting signals are at low signal-to-noise in the data! Appropriately, I spoke about finding exoplanets in the Kepler data. There are many connections between data-driven astrophysics and contemporary finance.

2017-06-12

reading the basics

Today we decided that the newly-christened Astronomical Data Group at Flatiron will start a reading group in methods. Partially because of the words of David Blei (Columbia) a few weeks ago, we decided to start with BDA3, part 1. We will do two chapters a week, and also meet twice a week to discuss them. I haven't done this in a long time, but we realized that it will help our research to do more basic reading.

This week, Maggie Lieu (ESA) is visiting Justin Alsing (Flatiron) to work (in part) on Euclid imaging analysis. We spent some time discussing how we might build a training set for cosmic rays, asteroids, and other time-variable phenomena in imaging, in order to train some kind of model. We discussed the complications of making a ground-truth data set out of existing imaging. Next up: Look at what's in the HST Archive.

2017-06-11

summer plans

I worked for Hans-Walter Rix (MPIA) this weekend: I worked through parts of the After Sloan 4 proposal to the Sloan Foundation, especially the parts about surveying the Milky Way densely with infrared spectra of stars. I also had long conversations with Rix about our research plans for the summer. We have projects to do, and a Gaia Sprint to run!

2017-06-08

music and stars

First thing, I met with Schiminovich (Columbia), Mohammed (Columbia), and Dun Wang (NYU) to discuss our GALEX imaging projects. We decided that it is time for us to produce titles, abstracts, outlines, and lists of figures for our next two papers. We also realized that we need to produce pretty-picture maps of the plane survey data, and compare it to Planck and GLIMPSE and other related projects.

I had a great lunch meeting with Brian McFee (NYU) to catch up on his research (on music!) and ask his advice on various time-domain projects I have in mind. He has new systems to recognize chords in music, and he claims higher performance than previous work. We discussed time-series methods, including auto-encoders and HMMs. As my loyal reader knows, I much prefer methods that deal with the data probabilistically; that is, not methods that always require complete data without missing information, and so on. McFee had various thoughts on how we might adapt methods that expect complete data for tasks that are given incomplete data, like tasks that involve Kepler light curves.

2017-06-07

post-main-sequence stellar evolution

At Stars group meeting, Matteo Cantiello (Flatiron) had us install MESA and then gave us a tutorial on aspects of post-main-sequence evolution of stars. There were many amazing and useful things, and he cleared up some misconceptions I had about energy production and luminosity during the main-sequence and red-giant phases of stellar evolution. He showed some hope (because of convective-region structure, which in turn depends on opacity, which in turn depends on chemical abundances) that we might be able to measure some aspects of chemical abundances with asteroseismology in certain stellar types.

In the Cosmology group meeting, we discussed many topics, but once again I got fired up about automated or exhaustive methods of searching for (and analyzing) estimators, both for making measurements in cosmology, and for looking for anomalies in a controlled way (controlled in the multiple-hypothesis sense). One target is the neutrino mass, whose signature is in the large-scale structure, but subtly.

In the space between meetings, Daniela Huppenkothen (NYU) and I worked with Chris Ick (NYU) to get him started building a mean model of Solar flares, and looking at the power spectrum of the flares and their mean models. The idea is to head towards quantitative testing of quasi-periodic oscillation models.