2018-02-28

#siRTDM18, day 3

I arrived in Berkeley last night, and today was my first day at a full-week workshop on real-time decision-making at the Simons Institute for the Theory of Computing at UC Berkeley. The day started with amazing talks about Large Hadron Collider hardware and software by Caterina Doglioni (Lund) and Benjamin Nachman (LBNL). The cut from collisions to disk-writing is a factor of ten million, and they are writing as fast as they can.

The triggers (which trigger a disk-writing event) are hardware-based close to the metal, with a second layer that is software-based. This means that when they upgrade the triggers, they are often doing hardware upgrades! Some interesting things came up, including the following:

- Simulating is much slower than the real world, so months of accelerator run-time require years of computing on enormous facilities just for simulation. These simulations need to be sped up, and machine-learning emulators are very promising.
- Right now events are stored in full, but only certain reconstructed quantities are used for analysis. In principle, if these quantities could be agreed upon and computed rapidly, the system could store less per event and then many more events, reducing the insanity of the triggers.
- Every interesting (and therefore triggered, saved) event is simultaneous with many uninteresting events, so in principle the system currently saves a huge control sample, which apparently hasn't been fully exploited.

Of course the theme of the meeting is decision-making. So much of the discussion was about how you run these experiments so that you decide to keep the events that will turn out to be most interesting, when you don't really know what you are looking for!

2018-02-27

data predicting data; bad Solar System

First thing in the morning, I met with Judy Hoffman (Berkeley) to discuss her computer-vision and machine-learning work. She suggested that auto-encoder-like machine-learning methods could be repurposed to make predictions from one kind of data to another kind of data on the same object. For instance, we could train an encoder to predict an exoplanet's radial-velocity signal given its Kepler light curve. Or etc! This appeals to me because it uses machine learning to connect data to data, without commitment to latent quantities or true labels for anything. Relatedly, she pointed me to a new kind of model called ADDA, for which she is responsible.
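To make the idea concrete, here is a toy sketch (all data are fake, and this is not ADDA itself): an off-the-shelf regressor trained to map one data view of an object to another view of the same object, with no latent labels anywhere in the pipeline.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy "data predicting data": map light-curve features to RV-curve values.
rng = np.random.default_rng(0)
lightcurves = rng.standard_normal((500, 200))  # stand-in Kepler features
rv_curves = rng.standard_normal((500, 30))     # stand-in RV measurements

# A bottlenecked network, loosely auto-encoder-like in shape.
model = MLPRegressor(hidden_layer_sizes=(64, 16, 64), max_iter=500)
model.fit(lightcurves, rv_curves)              # data predicting data
predicted_rv = model.predict(lightcurves[:5])  # predictions for new objects
```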

In the afternoon, Chiara Mingarelli (Flatiron) gave the NYU Astro Seminar about pulsar timing and gravitational radiation, expressing the hope and expectation that this method will deliver signals soon. She told a very interesting story about a false-positive detection that nearly went to press before they figured out that it resulted from residuals in the Solar System ephemerides. The SS comes in because you have to correct Earth-bound timings to a frame that is at rest (or moving at constant velocity) with respect to the SS barycenter.
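To make the dependence concrete, here is roughly what that correction looks like in astropy (the pulsar position and observatory are made up); note the ephemeris keyword, which is exactly where the SS model, and hence the systematic, enters:

```python
from astropy.time import Time
from astropy.coordinates import SkyCoord, EarthLocation
import astropy.units as u

pulsar = SkyCoord(ra=104.95 * u.deg, dec=-28.0 * u.deg)  # made-up position
obs = EarthLocation.from_geodetic(lon=-74.0 * u.deg, lat=40.7 * u.deg)
t = Time("2018-02-27 00:00:00", scale="utc", location=obs)

# Light-travel-time correction to the SS barycenter; the answer depends on
# the chosen ephemeris ("builtin", "de430", ...), the source of the residuals.
ltt = t.light_travel_time(pulsar, kind="barycentric", ephemeris="builtin")
t_bary = t.tdb + ltt  # arrival time referred to the barycentric frame
```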

This isn't the first time I have heard this complaint. The astronomical community really needs an open-source and probabilistic SS ephemeris, so that we can use the SS model responsibly inside our inferences. Freedom-of-Information-Act time?

2018-02-26

adaptive observing programs

When God provides a meeting-free gap in the day, it is incumbent on the astronomer to use that time to do research. I stole time today to work on preparation for my tess.ninja projects. My plan is to look at algorithmic approaches to adaptive observing campaigns, so that an exoplanet follow-up campaign from the ground can be simultaneously efficient at confirming true planets, measuring planet properties, and rejecting false positives, but also useful for long-term future statistical projects. In general, statistical usability and efficiency are at odds! These ideas are related to active learning but also to decision theory. One question: Would a ground-based telescope time-allocation committee accept an active-learning proposal?
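Here is a toy of the decision-theory framing (my illustration, not a worked-out algorithm; the utility and weights are invented): each night, observe the target whose next observation maximizes a utility that trades off confirmation value against long-term statistical usability, with the tension made explicit as a weight.

```python
import numpy as np

def choose_target(p_planet, n_obs, w_stat=0.3):
    """Toy scheduler: p_planet is the current planet probability per target,
    n_obs the number of visits so far; w_stat trades statistics vs. confirmation."""
    p = np.clip(np.asarray(p_planet), 1e-6, 1 - 1e-6)
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))  # high when undecided
    stat_value = 1.0 / (1.0 + np.asarray(n_obs))          # diminishing returns
    return int(np.argmax((1 - w_stat) * entropy + w_stat * stat_value))

# Example: three candidates; the undecided, under-observed one wins.
print(choose_target([0.95, 0.50, 0.50], [2, 2, 10]))
```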

And I also did some information-theory-related math for Christina Eilers (MPIA): She is building latent-variable models for APOGEE spectra, but working in a low-dimensional basis that is a linear projection of the data; she needs measurement-uncertainty estimates in that low-dimensional basis.
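The core of the math is just linear propagation of the covariance: if the projection is z = P y and the pixel-space covariance is C, then the covariance in the low-dimensional basis is P C P^T. A sketch (names hypothetical):

```python
import numpy as np

def project_uncertainties(P, C):
    """Propagate a pixel-space covariance through a linear projection.

    P: (n_basis, n_pixels) projection matrix.
    C: (n_pixels, n_pixels) covariance (often diagonal for per-pixel errors).
    Returns the (n_basis, n_basis) covariance in the projected basis.
    """
    return P @ C @ P.T
```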

2018-02-23

resonances!

[No posts for a while: I was on a short break.]

A huge highlight today came in the parallel-working meeting, where Elisabeth Andersson (NYU) got her code working to plot Kepler light curves folded on various periods, in particular periods that are integer ratios of known exoplanet periods. We are going to search for resonant signals. And warning: We are even going to look for 1:1 resonances, which might have escaped detection previously! We did various hacks to flatten the heck out of the light curves, which we might come to regret if we don't come back to them.
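For the record, the folding itself is trivial; the whole game is in the choice of period ratios. A toy sketch (fake data, and a real run would include the flattening step):

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.sort(rng.uniform(0.0, 90.0, 4000))        # days; fake time stamps
flux = 1.0 + 1e-4 * rng.standard_normal(t.size)  # flat, noisy light curve

def fold(t, flux, period, t0=0.0):
    """Return the light curve sorted by phase for a trial period."""
    phase = ((t - t0) / period) % 1.0
    order = np.argsort(phase)
    return phase[order], flux[order]

# Fold on integer ratios of a known planet period, including 1:1.
p_known = 10.5  # days; placeholder value
for num, den in [(1, 1), (2, 1), (1, 2), (3, 2), (2, 3)]:
    phase, f = fold(t, flux, p_known * num / den)
```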

2018-02-15

negatory

Tried to write; total fail. Doing stuff for my two jobs. Not complaining! Just not researching, not today.

2018-02-14

comoving and coeval stars; and the pre-infall Sagittarius

At the Gaia DR2 prep meeting, I discussed comoving stars and related matters with Oh and Price-Whelan. We discussed moving from our DR1 work, which made use of marginalized likelihoods for catalog generation, to a parameter-estimation method. What would that look like? As my loyal reader knows, I prefer parameter-estimation methods, for both pragmatic and philosophical reasons. But once you go to parameter-estimation methods, there are lots of parameters you could in principle estimate. For example: You can look at the space-time event at which the two stars made their closest approach in the past, and how far apart they were at that point. If the separation is small, then perhaps they are coeval? That might be much more interesting than comoving, in the long run.
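As a toy: if you treat both stars as moving on straight lines (a real version would integrate orbits in a Galactic potential), the closest approach has a closed form, which is enough to see what the new parameters would be.

```python
import numpy as np

def closest_approach(dx, dv):
    """Time (Myr) and separation (pc) of closest approach, straight-line model.

    dx: 3-vector separation now (pc); dv: 3-vector relative velocity (pc/Myr).
    Minimizes |dx + t dv| analytically; t < 0 means the approach was in the past.
    """
    t_min = -np.dot(dx, dv) / np.dot(dv, dv)
    sep = np.linalg.norm(dx + t_min * dv)
    return t_min, sep

# Example: stars 2 pc apart, separating now, so they converge in the past.
print(closest_approach(np.array([2.0, 0.0, 0.0]), np.array([0.1, 0.02, 0.0])))
```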

At Stars group meeting, Allyson Sheffield (CUNY) and Jeff Carlin (LSST) showed us results on abundances of M-type giant stars in the Sagittarius tidal streams. They can clearly see that the progenitor of the stream had element-abundance gradients in it prior to tidal stripping. They also showed that the stream matches onto the abundance trends of the Sagittarius dwarf body. But the coolest thing they showed is that there are two different types of alpha elements, which they called explosive and hydrostatic, and the two types have different trends. I need to check this in APOGEE! Sheffield also mentioned some (possibly weak) evidence that the bifurcation in the stream is not from multiple wraps of the stream but rather because the object that was tidally shredded was a binary (galaxy-plus-satellite) pair! I hope that's true, because it's super cool.

2018-02-13

writing on dimensionality

Because of work Bedell did (on a Sunday!) in support of the Milky Way Mapper meeting, I got renewed excitement about our element-abundance-space dimensionality and diversity work: She was able to show that we can see aspects of the low dimensionality of the space in the spectra themselves, mirroring work done by Price-Jones (Toronto) in APOGEE, but with more specificity about the abundance origins of the dimensionality. That got me writing text in a document. As my loyal reader knows, I am a strong believer in writing text during (not after) the data-analysis phases. I'm also interested in looking at information-theoretic or prediction or measurement approaches to dimensionality.

2018-02-12

FML, and the Big Bounce

The day started with a realization by Price-Whelan (Princeton) and me that, in our project The Joker, because of how we do our sampling, we have everything we need at the end of the sampling to compute the fully marginalized likelihood (FML) of the input model precisely. That's useful, because we are not just making posteriors, we are also making decisions (about, say, what to put in a table or what to follow up). Of course (and as my loyal reader knows), I don't think it is ever a good idea to compute the FML!
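Concretely (a sketch in my notation, not The Joker's actual code): each sampling starts from prior draws with a marginalized-likelihood evaluation at each, so the FML estimate is just the prior-sample mean of those likelihoods, computed here in log space for numerical stability.

```python
import numpy as np

def ln_fml(ln_like):
    """Monte Carlo FML estimate from ln-likelihoods at N prior draws.

    ln_like: array of ln-likelihood values, one per draw from the prior.
    Returns ln of mean(exp(ln_like)), via log-sum-exp for stability.
    """
    n = len(ln_like)
    m = np.max(ln_like)
    return m + np.log(np.sum(np.exp(ln_like - m))) - np.log(n)
```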

At lunch, Paul Steinhardt (Princeton) gave a great black-board talk about the idea that the Universe might have started in a bounce from a previously collapsing universe. His main point (from my perspective; he also has particle-physics objectives) is that the work that inflation does with a quantum mechanism might be achievable with a classical mechanism, if you could design the bounce right. I like that, of course, because I am skeptical that the original fluctuations are fundamentally quantum in nature.

I have many things to say here, but I'll just say a few random thoughts: One is that the strongest argument for inflation is the causality argument, and that can be satisfied by other space-time histories, like a bounce. That is, the causality (and related) problems are fundamentally about the geometry of the space and the horizon as a function of time, and there are multiple possible universe-histories that would address them. So that's a good idea. Another random thought is that there is no way (people think) to make the bounce happen without violating the null-energy condition. That's bad, but so are various things about inflation! A third thought is that the pre-universe (the collapsing one) probably has to be filled with something very special, like a few scalar fields. That's odd, but so is the inflaton! And those fields could be classical.

I walked into this talk full of skepticism, and ended up thinking it's a pretty good program to be pursuing.

2018-02-11

welcome to the Milky Way Mapper

Today was the (unfortunately Sunday) start of the first full meeting of the Milky Way Mapper team; MWM is one component of the proposed SDSS-V project, of which I will be a part. It was very exciting! The challenge is to map a large fraction of the Milky Way in red-giant stars (particularly cool, luminous giants), but also to get a full census of binary stars in different states of evolution, follow up exoplanets, and pursue other scientific goals. Rix was in town, and pointed out that the survey needs a description that can be stated in two sentences. Right now it is a mix of projects, and doesn't have a description shorter than two dense slides! But it's really exciting and will support an enormous range of science.

There were many highlights of the meeting for me, most of them about technical issues like the selection function, adaptive survey design, and making sensitive statistical tests of exoplanet systems. There was also a lot of good talk about how to do non-trivial inferences about binary-star populations with very few radial-velocity measurements per star. That is where Price-Whelan and I shine! Another subject that excited me is how one can design a survey that is simultaneously simple to operate but also adaptive as it goes: Can we algorithmically modify what we observe and when, based on past results, increasing efficiency (on, say, binary stars or exoplanets), but nonetheless produce a survey that is possible to model and understand for population statistics? Another subject was validation of stellar-parameter estimates: How do we know that we are getting good answers? As my loyal reader can anticipate, I was arguing that such tests ought to be made in the space of the data. Can they be?

2018-02-09

warps and other disk modes

Adrian Price-Whelan (Princeton) and Chervin Laporte (Victoria) convened a meeting at Flatiron today to discuss the outer disk. It turned into a very pleasurable free-for-all, in part because Kathryn Johnston (Columbia) came down and Sergey Koposov (CMU) was in town for it! We argued about the best tracers for fast or early Gaia DR2 results on the warp and other outer-disk structure, which looks non-trivial and interesting. One thing I proposed, which I would like to think about more, is taking the disk-warping simulations of Laporte and using them to inspire or generate a set of basis functions for disk modes, in which expected warps and wiggles are compactly described. Then we could fit the Gaia data with these modes and have a regularized but non-parametric model of the crazy.
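The fitting step might look like this (a sketch; everything here is hypothetical, with the design matrix standing in for modes extracted from the simulations): a ridge-regularized linear fit of vertical displacements onto the basis.

```python
import numpy as np

def fit_modes(B, z, lam=1.0):
    """Ridge regression for mode amplitudes.

    B: (n_stars, n_modes) design matrix, basis functions evaluated at each
       star's (R, phi); z: (n_stars,) vertical displacements or velocities.
    Returns amplitudes a minimizing |B a - z|^2 + lam |a|^2; the lam term is
    the regularization that keeps the non-parametric model sane.
    """
    n_modes = B.shape[1]
    return np.linalg.solve(B.T @ B + lam * np.eye(n_modes), B.T @ z)
```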

Late in the day, Ana Bonaca (Harvard) and I walked through our full paper and results on the information in streams with Johnston and Price-Whelan. They gave us lots of good feedback on how to present our results and what to emphasize.

2018-02-08

dust

The highlight of my low-research day was a great seminar by Eddie Schlafly (LBL) about Milky Way dust. He showed that he can build three-dimensional models (and maybe four-dimensional, because radial velocities are available) from PanSTARRS and APOGEE data (modeling stellar spectra and photometry), and he showed that he can even map the extinction curve in three dimensions! That reveals new structures. It is very exciting that in the near future we might be able to really build a dynamical model of the Milky Way with dust as a kinematic tracer. It is also interesting to think about the connection to CMB missions. He showed a ridiculous Planck polarization map that I hadn't seen before: It looks like a painting!

2018-02-07

Gaia helpdesk and optimized photometry and various

We got way too many applications for the #GaiaSprint. This is a great problem to have, although it is giving me an ulcer: Almost every applicant is obviously appropriate for the Sprint and should be there! So the SOC discussed ways we could expand the Sprint but maintain its culture of intimacy and fun.

At the Gaia DR2 prep workshop, we discussed our preparations for joining the Kepler data (and especially the whole KIC) with the data from Gaia DR2. We are hoping to have this done within minutes of the data release, making use of the high-end ESA data systems. This activity resulted in the submission of a trouble ticket to the Gaia helpdesk.

At Stars group meeting, way too much happened to report. But Ben Pope (NYU) showed that his work using L1 regularization to optimize photometric apertures works extremely well in some cases but is very brittle, for reasons we don't yet understand (see the sketch below). Simon J Murphy (Sydney) started to talk about what he and Foreman-Mackey (Flatiron) have achieved in his week-long visit, but he got side-tracked (by me) onto how awesome delta-Scuti stars are and somehow why. And Ana Bonaca (Harvard) gave an overview of what we are doing with stellar streams.
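Here is a crude stand-in for the L1-aperture idea (my guess at a formulation, not Pope's actual objective or code): choose nonnegative pixel weights so that the weighted light curve is as flat as possible under an L1 (total-variation) measure of flatness.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
pixels = 1.0 + 0.01 * rng.standard_normal((9, 500))  # fake (n_pixels, n_times)

def tv_objective(w):
    """L1 norm of point-to-point differences of the weighted light curve."""
    lc = w @ pixels
    return np.sum(np.abs(np.diff(lc / lc.mean())))

w0 = np.full(pixels.shape[0], 1.0 / pixels.shape[0])
# Note: the objective is non-smooth, so a non-smooth-aware solver would be
# better in practice; this is just a sketch.
res = minimize(tv_objective, w0, bounds=[(0.0, 1.0)] * pixels.shape[0])
weights = res.x / res.x.sum()  # normalized pixel weights: the "aperture"
```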

2018-02-06

spectral representation; purely geometric spectroscopic parallaxes

Today was a low-research day! Research was pretty much limited to a (great) call with Rix (MPIA) and Eilers (MPIA). We discussed several important successes of Eilers's work on latent-variable models. One is that she finds that she can improve the performance of The Cannon operating on stellar spectra if she reduces the dimensionality of the stellar spectra before she starts! That's crazy; how can you throw away information and do better? I think the answer must have something to do with model wrongness: The model is wrong (as all models are), and it is probably less wrong in the projected space than it was in the original pixel basis. This all relates to data-representation issues that I have worried about (but done nothing about) before.
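For concreteness, the projection step might look like this (assuming plain PCA, which may not be Eilers's actual basis choice; the spectra here are fake):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
spectra = rng.standard_normal((1000, 8575))  # fake APOGEE-sized spectra

# Project onto the top components; downstream label inference then happens
# in this 50-dimensional coefficient space instead of the pixel space.
pca = PCA(n_components=50)
coeffs = pca.fit_transform(spectra)  # (1000, 50) reduced representation
```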

Another important success is that Eilers can run the Gaussian-process latent-variable model (GPLVM) on the dimensionality-reduced space much, much faster than on the original data space, and not only does it do better than it did before, it does better than The Cannon. That's great, but it isn't just performance we are looking for: The GPLVM has better model structure, such that we can infer labels without having training data that have nuisance-parameter labels. That is, we can make a predictive model for the interesting subspace of the label space. This is tremendously important going into Gaia DR2, because we want to train a spectroscopic-parallax method using only geometric inputs: No stellar models, ever!

2018-02-05

information in stellar streams

Ana Bonaca (Harvard) arrived in town for a week of hacking on our stream-information project. She spent today getting more streams into the analysis. The point of the project is not to model each stream in detail, but rather to examine, using the Fisher information, what each stream (or any combination of streams) contributes to the measurement of gravitational-potential parameters. We also worked on paper scope and on our original goal (from way back) of constraining the mass and orbit of the LMC.
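The bookkeeping here is standard: if A holds derivatives of the predicted stream observables with respect to the potential parameters, and C is the observational covariance, the Fisher matrix is F = A^T C^{-1} A, and streams combine by simply summing their F matrices. A sketch (names hypothetical):

```python
import numpy as np

def fisher_matrix(A, C):
    """Fisher information for a Gaussian data model.

    A: (n_data, n_params) matrix of d(model)/d(theta) at the fiducial model.
    C: (n_data, n_data) covariance of the observables.
    Returns F = A^T C^{-1} A; the Cramer-Rao bound on theta is inv(F).
    """
    return A.T @ np.linalg.solve(C, A)
```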

2018-02-02

adversarial approaches to everything

Today's parallel-working session at NYU was a dream. Richard Galvez (NYU) is working with Rob Fergus (NYU) to train a generative adversarial network (GAN) on images of galaxies. One issue with these GANs is that a generator can make good fake data in only a subspace of the whole data space and still do well adversarially. So Galvez is using a clustering (k-means) in the data space, and comparing the populations of the clusters in the true data and in the generated data, to check that coverage is good. This is innovative, and important if we are going to use these GANs for science.
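My reconstruction of the check (not Galvez's code): cluster the true data, then compare how true and generated samples populate the clusters; empty or underfilled clusters flag modes the generator is missing.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_coverage(real, fake, k=20, seed=0):
    """Fraction of real vs. generated samples landing in each k-means cluster.

    real, fake: (n_samples, n_features) arrays in the same data space.
    Returns two length-k histograms; compare them to assess coverage.
    """
    km = KMeans(n_clusters=k, random_state=seed, n_init=10).fit(real)
    real_frac = np.bincount(km.predict(real), minlength=k) / len(real)
    fake_frac = np.bincount(km.predict(fake), minlength=k) / len(fake)
    return real_frac, fake_frac
```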

Kate Storey-Fisher (NYU) is making something like adversarial (there's that word again) mock catalogs for large-scale-structure projects: She is going to make the selection function in each patch of the survey a nonlinear function of the housekeeping data (point-spread function, stellar density, transparency, season, and so on) that we have for that patch. Then we can see which LSS statistics are robust to the crazy. These mocks will be adversarial in the sense that they will represent a universe that is out to trick us, while GANs are adversarial in the sense that they use an internal competitive game for training.
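A toy version of such a selection function (functional form and coefficients entirely invented): completeness in each patch is a nonlinear squashing of that patch's housekeeping data, and the mock is subsampled accordingly.

```python
import numpy as np

def patch_completeness(psf_fwhm, star_density, transparency):
    """Toy adversarial completeness: a logistic squash of housekeeping data."""
    x = 2.0 * (psf_fwhm - 1.2) + 0.5 * np.log1p(star_density) - 3.0 * transparency
    return 1.0 / (1.0 + np.exp(x))  # completeness in (0, 1)

# Subsample a patch's mock objects with the patch's completeness.
rng = np.random.default_rng(3)
keep = rng.random(100_000) < patch_completeness(1.4, 2000.0, 0.9)
```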

And as I was explaining why I am disappointed with the choices that LSST has made for broad-band filters, Alex Malz (NYU) and I came up with an inexpensive and executable proposal that would satisfy me and improve LSST. It involves easy-to-make, stochastically ramped filters. I don't think there is an iceball's chance in hell that the Collaboration would even for a moment consider this plan, but the proposal is a good one. I guess this is adversarial in a third sense!

2018-02-01