2019-12-31

#AAS235 poster on demodulation

Today I helped Abby Shaum (NYU) finish her poster for the #AAS235 Meeting on using signal processing—and in particular phase demodulation—to find binary companions to stars with coherent asteroseismic modes in the NASA Kepler Mission data. Our abstract ended up being different from what we submitted! I hope no one minds. But she has a beautiful result, and a pretty mechanistic method to search for binaries. Our next step is to figure out how to generalize the method to modes that aren't coherent over the full mission lifetime. If we can do that, there is a lot more we can find, I suspect.
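For the record, the guts of the method fit in a few lines. Here is a minimal numpy sketch of the idea (my own toy version, not Shaum's code), assuming a single coherent mode at a known frequency f0, with times in days:

    import numpy as np

    def demodulate(t, flux, f0, window_days=10.0):
        # mix the mode down to zero frequency (heterodyne against a carrier)
        mixed = flux * np.exp(-2j * np.pi * f0 * t)
        # crude low-pass filter: running mean over the chosen window
        n = max(1, int(window_days / np.median(np.diff(t))))
        smooth = np.convolve(mixed, np.ones(n) / n, mode="same")
        # the unwrapped phase, divided by 2 pi f0, is a time delay; a binary
        # companion imprints a sinusoid at the orbital period
        return np.unwrap(np.angle(smooth)) / (2 * np.pi * f0)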

2019-12-30

writing

My only real research today is that I got a bit more writing done for Adrian Price-Whelan's paper on binary stars in APOGEE DR16. I worked at the top of the paper (introduction) and the bottom (discussion), my two favorite places!

2019-12-29

binary stars in APOGEE

Adrian Price-Whelan (Flatiron) has a draft of a paper I'm pretty excited about: It runs The Joker, our custom sampler for the Kepler two-body problem, on every star (that meets some criteria) in the APOGEE DR16 sample. This method treats binary-companion discovery and binary-companion orbit characterization as the same thing (in the spirit of things I have done in exoplanet discovery and characterization). We confidently find and characterize 20,000 binaries across the H–R diagram, with strong variations of binary fraction with temperature, logg, and luminosity. And we release samplings for everything, even the things that don't obviously have companions. Well, it doesn't count as research if I'm merely excited about a paper! But I did do a tiny bit of writing in the paper, making it my first research day in a while. Yesterday I tweeted this figure from the manuscript.

2019-12-20

CHIPPR

My former student Alex Malz (Bochum) has been back in town to finish one of the principal papers from his thesis, on the CHIPPR method for combining probabilistic outputs from cosmology surveys. Our first target was redshift probability distributions: Many surveys are producing not just photometric redshift estimates, but full probability distributions over redshift. How to use these? Many of the ways people naively use them (like stacking them, or binning them) are probabilistically incorrect. If you want a redshift distribution, you need to perform a hierarchical inference using them. This, in turn, requires that you can convert them into likelihoods. That's not trivial, for many reasons. Our first paper on this is somewhat depressing, because of the requirements it puts both on the redshift producer and on the redshift user. However, it's correct! And it performs better than any of the unjustified things people do (duh). I signed off on the paper today; Malz will submit during the break.
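The key probabilistic move is easy to state, even though satisfying it in practice is hard: each catalog PDF is a posterior taken under some interim prior, and that prior has to be divided back out to get a likelihood before the hierarchical inference. A minimal sketch with gridded PDFs (my own toy, assuming the interim prior is known; this is not the CHIPPR code):

    import numpy as np

    def hierarchical_lnlike(nz, z_grid, pdfs, interim_prior):
        # pdfs: (n_galaxies, n_z) per-galaxy posteriors under interim_prior.
        # Dividing out the interim prior converts posteriors to likelihoods;
        # naive stacking of the posteriors skips this step and is biased.
        likes = pdfs / interim_prior[None, :]
        dz = z_grid[1] - z_grid[0]  # assumes a uniform redshift grid
        per_galaxy = np.sum(likes * nz[None, :], axis=1) * dz
        return np.sum(np.log(per_galaxy))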

2019-12-19

housekeeping data; Milky Way

Last night or this morning (not sure which) Lily Zhao (Yale) made some nice discoveries about the EXPRES instrument: She found that the housekeeping data (various temperature sensors, chilled-water load, etc.) correlate really well with the instrument calibration state as defined by our hierarchical (principal-component) model. That's interesting, because it opens up the possibility that we could interpolate the non-parametric calibration parameters in the housekeeping space rather than in time. That would be cool. Indeed, time is just one component of the housekeeping data!
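The test here is simple to set up: regress the calibration amplitudes from the hierarchical model onto the housekeeping features and see how well held-out exposures are predicted. A sketch with synthetic numbers (the shapes and the random-forest choice are my own assumptions, not Zhao's actual pipeline):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(17)
    X = rng.normal(size=(500, 8))           # housekeeping: temperatures, etc.
    A = X[:, :2] @ rng.normal(size=(2, 3))  # 3 calibration PCA amplitudes
    model = RandomForestRegressor(n_estimators=200).fit(X[:400], A[:400])
    A_pred = model.predict(X[400:])         # calibration state interpolated
                                            # in housekeeping space, not time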

In the afternoon, Kathryn Johnston (Columbia) hosted the L2G2—the local Local Group group—for talks and discussions. There were many good discussions, led by Jason Hunt (Flatiron). Highlights for me included, first, work on detailed abundances with The Cannon by Adam Wheeler (Columbia), who did a great job of describing (and re-implementing) the method, and extending it to do better things. Another was a talk on SAGA by Marla Geha (Yale), who showed that the Local Group satellites might be group satellites rather than galaxy satellites, in the sense that they look more like the satellites of a more massive object than like satellites of Milky Way or M31 analogs. It was a great day of great discussions.

2019-12-18

EXPRES operational data analysis

At Stars & Exoplanets Meeting today, Debra Fischer (Yale) and Ryan Petersburg (Yale) came in to tell us about the EXPRES instrument for extreme-precision radial velocity and the Hundred Earths long-term observing project. Fischer talked about how they do their RV analysis in spectral chunks; the issue (and maybe the issue of EPRV in general) is how to combine those chunks, such that the most informative chunks get the most weight but the outliers get caught. The discussion also touched on micro-tellurics, which might need to be modeled or masked. Petersburg discussed the question of how you define the signal-to-noise ratio of a spectrum. It's not a well-posed problem! But it matters, because contemporary instruments expose until they hit some SNR threshold.
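The simplest version of the chunk-combination problem looks like the following (a sketch of the generic move, not Fischer's pipeline): inverse-variance weighting gives the informative chunks the most weight, and a clipping pass keeps outlier chunks from dominating.

    import numpy as np

    def combine_chunks(rv, rv_err, clip=5.0):
        w = 1.0 / rv_err ** 2
        mean = np.sum(w * rv) / np.sum(w)         # first pass: weighted mean
        good = np.abs(rv - mean) / rv_err < clip  # drop catastrophic chunks
        w, rv = w[good], rv[good]
        return np.sum(w * rv) / np.sum(w), 1.0 / np.sqrt(np.sum(w))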

2019-12-17

GPs for p-modes in RV data

I checked in with Megan Bedell (Flatiron) on our projects today. She showed really nice results in which she fits simulated radial-velocity data for a star that is oscillating in finite-coherence asteroseismic modes. My loyal reader knows that we have been working on this! But the cool thing is that she can now fit the oscillations with a Gaussian Process with a kernel that is roughly correct, or exactly correct, even when the observations are integrated over finite exposure times. That's a breakthrough. It depends in large part on the magic of exoplanet.

Now GPs are extremely flexible, so the question is: How do we validate results? After all, any GP can thread through any set of points. We came up with two schemes. The first is an N-fold cross-validation, in which we train the GP on all but 1/N-th of the data, predict that 1/N-th, and cycle through the folds to get everything. First experiments along these lines seem to show that the more correct the kernel, the better we predict! The second is that we make fake data that includes the p-modes and a simulated planet companion, and show that our planet-companion inferences become more accurate as our kernel becomes more accurate.
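Here's what the fold-wise prediction might look like, assuming celerite2 (the solver underneath exoplanet's GP machinery); the kernel numbers are invented for illustration, not Bedell's fitted values:

    import numpy as np
    import celerite2
    from celerite2 import terms

    # a stochastically-driven harmonic oscillator has roughly the right
    # spectral shape for solar-like p modes (period of about 5 minutes)
    kernel = terms.SHOTerm(S0=1.0, w0=2 * np.pi / (5.0 / 60 / 24), Q=10.0)

    def cv_rms(t, y, yerr, n_folds=8):
        fold = np.arange(len(t)) % n_folds
        resid = []
        for k in range(n_folds):
            train, test = fold != k, fold == k
            gp = celerite2.GaussianProcess(kernel)
            gp.compute(t[train], yerr=yerr[train])
            mu = gp.predict(y[train], t=t[test])  # predict held-out points
            resid.append(y[test] - mu)
        return np.sqrt(np.mean(np.concatenate(resid) ** 2))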

We're hoping to improve on the results of this paper on p-mode mitigation. My conjecture is that when we use an accurate GP kernel, we will get exoplanet inferences at any exposure time that are better than one gets using the quasi-optimal exposure times and a “jitter” term to account for the residual p-mode noise.

2019-12-16

what is a tensor?

It's finals week, so not much research. But I did get in a useful conversation with Soledad Villar (NYU) about the definition of a tensor (as opposed to a matrix, say). She had a different starting point than I did! But we converged to similar things. You can think of it in coordinate-free or coordinate-transform ways, you can think of it in operator terms, and you can think of it as being composed of sums of outer products of vectors. I always thought of a tensor as being a ratio of vectors, in some sense, but that's a very hand-wavey idea.
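For the record, here are the two views we converged on, in symbols (my notation): a 2-tensor is an object that transforms with one rotation matrix per index, and, equivalently, anything expressible as a sum of outer products of vectors. A matrix is just a table of numbers until you attach the first property to it.

    T'^{i'j'} = R^{i'}{}_{i} \, R^{j'}{}_{j} \, T^{ij}
    \qquad
    T = \sum_{r} u_r \otimes v_r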

2019-12-13

#MLringberg2019, day 5

Today was the final day of Machine Learning Tools for Research in Astronomy. I gave my talk, which was about causal structure. One thing I talked about is the strong differences (stronger than you might think) between generative and discriminative directions for machine learning. Another thing I talked about is the way that machine-learning methods can be used to denoise, deconvolve, and separate signals when they are designed with good causal structure.

Right after me, Timmy Gebhard (MPI-IS and ETH) gave an absolutely excellent talk about half-sibling regression ideas related to instrument calibration (think Kepler, TESS, and direct imaging). He beautifully explained exoplanet direct imaging and showed how his improvements to how the data are used change the results. He doesn't have the killer app yet, but he is spending the time to think about the problem deeply. And if he switches from one-band direct imaging to imaging spectroscopy (which is the future!), I think his methods will beat the alternatives. He also spoke really well about the causal-inference philosophy behind his methods.
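The core of half-sibling regression is tiny. Here is a toy version (mine, not Gebhard's; his methods are far more careful): predict the target pixel from other pixels that share the systematics but cannot see the signal, and keep the residual.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    systematic = rng.normal(size=300)                # shared instrument trend
    siblings = systematic[:, None] + 0.1 * rng.normal(size=(300, 20))
    signal = 0.01 * np.sin(np.linspace(0, 20, 300))  # what we care about
    target = systematic + signal + 0.01 * rng.normal(size=300)

    model = LinearRegression().fit(siblings, target)
    cleaned = target - model.predict(siblings)       # the signal survives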

My talk slides are here and I also led the meeting summary discussion. My summary slides are here. The summary discussion was valuable. In general, the size and style of the meeting—and its location in lovely Ringberg Castle—led to a great environment and culture. Plus there was great social engineering by Ntampaka, Nord, Pillepich, and Peek. Peek also made progress on a community-driven set of Ringberg Recommendations, which might end up as a long-term outcome of the meeting.

2019-12-12

#MLringberg2019, day 4

Today was the fourth day of Machine Learning Tools for Research in Astronomy. Some of my personal highlights were the following:

Tomasz Kacprzak (ETH) showed that he can improve the precision of cosmological inferences by using machine learning to develop new statistics of cosmological surveys to compare to simulations. His technology is nearly translationally invariant, but not guaranteed to be perfect, and not guaranteed to be rotationally symmetric (or rotationally and translationally covariant, I should say). So I wondered whether any of the increased precision he showed might be coming from properties of the data that are not consistent with our symmetries. That is, precision might increase even if the features being used are not appropriate, given our assumptions. I'd love to have a playground for thinking about this more. It relates to ideas of over-fitting and adversaries we discussed earlier in the week.
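The playground could start very simple, something like the following (my toy, not Kacprzak's setup): feed rotated copies of the same field through the learned statistic and measure the spread; an exactly invariant statistic gives zero.

    import numpy as np

    def invariance_gap(statistic, field):
        # spread of the statistic over the four 90-degree rotations
        return np.std([statistic(np.rot90(field, k)) for k in range(4)])

    field = np.random.default_rng(1).normal(size=(64, 64))
    print(invariance_gap(np.mean, field))            # 0.0: invariant
    print(invariance_gap(lambda f: f[0, 0], field))  # > 0: not invariant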

Luisa Lucie-Smith (UCL) showed some related things, but more along the lines of finding interpretable latent variables that bridge the connection between the cosmological initial conditions and the dark-matter halos that are produced. I love that kind of approach! Can we use machine learning to understand the systems better? Her talk led to some controversy about how autoencoders (and the like) could be structured for interpretability. As my loyal reader knows, I don't love the “halo” description of cosmology; this could either elucidate it or injure it.

Doug Finkbeiner (Harvard) showed how he can patch and find outliers in massive spectroscopy data sets using things that aren't even machine learning, according to many in the room! That was fun, and probably very useful. This all connects to a theme of the meeting so far, which is using machine learning to aid in visualization and data discovery.

In between sessions we had a great conversation about student mentoring. This is a great idea at a workshop, where there are both students and mentors, and the participants have gotten to know one another. Related to this, Brian Nord (Fermilab) gave a nice talk about relationships between what we have been thinking about in machine learning and work in the area of science, technology, and society. He's trying to build new scientific communities, but in a research-based way. And I mean social-science-research-based. That's radical, and more likely to succeed than many of the things we physicists do without looking to our colleagues.

2019-12-11

#MLringberg2019, day3

Today was the third day of Machine Learning Tools for Research in Astronomy. Some random personal highlights were the following:

Prashin Jethwa (Vienna) showed first results from a great project to model 2-d imaging spectroscopy of galaxies as a linear combination of different star-formation histories on different orbits. People have modeled galaxies as superpositions of orbits. And as superpositions of star-formation histories. But the problem is linear in both, so much more can be done. My only criticism (and to be fair, Jethwa made it himself) is that they collapse the spectroscopy to kinematic maps before starting. In my opinion, the most interesting information will be in the spectral–kinematic joint domain, because different lines (which are sensitive to different star-formation histories and different element abundance ratios) will have different shapes in different parts of each galaxy. Exciting that this is happening now; it has been a dream for years.

Francois Lanusse (Berkeley) and Greg Green (MPIA) both gave talks that were aligned strongly with my interests. Lanusse is taking galaxy generators (like VAEs and GANs) and adding causal structure (like projection onto the sky plane, pixelization, and noisification) so that the generators produce something closer to the true galaxies, and something not exactly the same as the data. That's exciting and a theme I have been talking about in this forum for a while. For Lanusse, galaxies are nuisances, to be marginalized out as we infer the cosmic weak-lensing shear maps.
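Schematically, the move is to put the instrument between the generator and the data, so the generator is forced to live in true-galaxy space. A sketch of such a forward model (my own toy; Lanusse's models are differentiable and far more realistic):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def observe(true_galaxy, psf_sigma=1.5, noise_sigma=0.05, seed=0):
        # PSF convolution, then pixel noise; pixelization is the grid itself.
        # Train so that observe(generator(z)) matches data; the generator
        # output is then a deconvolved, denoised galaxy.
        rng = np.random.default_rng(seed)
        return (gaussian_filter(true_galaxy, psf_sigma)
                + noise_sigma * rng.normal(size=true_galaxy.shape))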

In a completely different domain, Green is modeling stars as coming from a compact color–magnitude diagram but then being reddened and attenuated by dust. He is interested in the dust, not the stars, so for him the CMD is a nuisance, as galaxies are for Lanusse. That makes it a great object to model with something ridiculous, like a neural net. He is living the dream I have of using the heavy learning machinery only for the nuisance parts of the problem, and reserving belief-consistent physical models for the important parts. Green was showing work that is only a few days old! But it looks very, very promising.

2019-12-10

#MLringberg2019, day 2

Today was the second day of Machine Learning Tools for Research in Astronomy. Two personal highlights were the following:

Soledad Villar (NYU) spoke about adversarial attacks against machine-learning methods used in astrophysics. Her talk was almost entirely conceptual; she talked about what constitutes a successful attack, and how you find it. My expectation is that these attacks will be very successful, as my loyal reader knows! The examples she showed were from stellar spectroscopy. Her talk was interrupted and followed by extremely lively discussion, in which the room disagreed about what the attacks mean about a method, and whether they are revealing or important. That was some fun controversy for the meeting.
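For a regression, one natural definition of a successful attack is a tiny input perturbation that produces a large change in the predicted label. A black-box sketch of how you might find one (my framing, not necessarily Villar's) by finite-difference gradient ascent:

    import numpy as np

    def attack(f, x, eps=0.01, n_steps=50, h=1e-4):
        # find delta with ||delta|| <= eps that maximizes f(x + delta);
        # flip the update's sign to push the prediction down instead
        delta = np.zeros_like(x)
        for _ in range(n_steps):
            grad = np.zeros_like(x)
            for i in range(x.size):
                e = np.zeros_like(x)
                e.flat[i] = h
                grad.flat[i] = (f(x + delta + e) - f(x + delta - e)) / (2 * h)
            delta = delta + 0.1 * eps * np.sign(grad)
            norm = np.linalg.norm(delta)
            if norm > eps:
                delta *= eps / norm  # project back onto the eps-ball
        return delta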

Tobias Buck (AIP) looked at methods to translate from image to image (like horse to zebra!) but in the context of two-d maps of galaxies: translating photometric images into maps of star formation and kinematics, for example. It's a promising area, although the training data are all simulations at this point. I asked him whether he could translate from a two-d galaxy image into a three-d dark-matter map. He was skeptical, because the galaxy is so much smaller than its dark-matter halo.

At one of the coffee breaks, Josh Peek (STScI) proposed that we craft some kind of manifesto or document that helps practitioners in machine learning in astronomy make good choices, which are pragmatic (because it is important that machine learning be used and tried) but also involve due diligence (to avoid the “just throw machine learning at it” problem in some of the literature). He had the idea that we have the right people at this meeting to make something like this happen. I noted that we tried to do things like that in our tutorial paper on MCMC sampling, where we try to both be pragmatic but also recommend achievable best practices. The challenge is to be encouraging and supportive, but also draw some lines in the sand.

2019-12-09

#MLringberg2019, day 1

Today was the first day of Machine Learning Tools for Research in Astronomy in Ringberg Castle in Germany. The meeting is supposed to bring together astronomers working with new methods and applied methodologists to make some progress. There is a mix of scientific presentations, participant-organized discussions, and unstructured time. Highly biased, completely subjective highlights from today included the following:

Michelle Ntampaka (Harvard) showed some nice results on using machine-learning discriminative regressions to improve cosmological inferences (the first of a few talks we will have this week along these lines). She emphasized challenges and lessons learned, which was useful to the audience. Among these, she emphasized the value she found in visualizing the weights of her networks. And she gave us some sense of her struggles with learning rate schedule, which I think is probably the bane of almost every machine learner!

Tom Charnock (Paris) made an impassioned argument that the outputs of neural networks are hard to trust if you haven't propagated the uncertainties associated with the finite information you have been provided about the weights in training. That is, the weights are a point estimate, and they are used to make a point estimate. Doubly bad! He argued that variational and Bayesian generalizations of neural networks do not currently meet the criteria of full error propagation. He showed some work that does meet it, but for very small networks, where Hamiltonian Monte Carlo has a shot of sampling. His talk generated some controversy in the room, which was excellent!

Morgan Fouesneau (MPIA) showed how the ESA Gaia project is using ideas in machine learning to speed computation. Even at one minute per object, they heat up a lot of metal for a long time! He showed that when you use your data to learn a density in the data space for different classes, you can make inferences that mitigate or adjust for class-imbalance biases. That's important, and it relates to what Bovy and I did with quasar target selection for SDSS-III.
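The trick, as I understand it: learn a density p(data | class) for each class separately, then classify with whatever class priors describe the target population, so imbalance becomes an explicit, adjustable prior rather than a hidden bias. A sketch with kernel density estimates (a toy of mine, echoing in spirit the quasar target selection):

    import numpy as np
    from sklearn.neighbors import KernelDensity

    rng = np.random.default_rng(0)
    quasars = rng.normal(0.0, 1.0, size=(1000, 2))  # rare class (colors)
    stars = rng.normal(1.5, 1.0, size=(20000, 2))   # common class

    kdes = [KernelDensity(bandwidth=0.3).fit(d) for d in (quasars, stars)]

    def classify(x, priors=(0.01, 0.99)):
        # log p(data | class) + log p(class), maximized over classes
        logpost = [k.score_samples(x) + np.log(p)
                   for k, p in zip(kdes, priors)]
        return np.argmax(logpost, axis=0)

    labels = classify(rng.normal(0.5, 1.0, size=(5, 2)))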

Wolfgang Kerzendorf (Michigan State) spoke about his TARDIS code, which uses machine learning to emulate a physical model and speed it up. But he's doing proper Bayes under the hood. One thing he mentioned in his talk is the “expanding photosphere method” to get supernova distances. That's a great idea; whatever happened to that?

2019-12-06

I've got 99 dust-reddening problems

I arrived in Heidelberg today, for a fast visit before a week at Ringberg Castle for a meeting on machine learning and astrophysics. I had two long conversations about science projects, one with Neige Frankel (MPIA), and one with Greg Green (MPIA). Frankel is trying to make comprehensive maps of the Milky Way with red-clump stars from SDSS-IV APOGEE and ESA Gaia data. She is using a big linear model to calibrate the distance variations with color, magnitude, and dust. But it seems to have problems at high reddening. We found that some of those problems were an artifact of sample cuts based on Gaia uncertainties. My loyal reader knows that such cuts are dangerous! But still some odd problems remain: Bugs or conceptual issues?

Green is trying to infer the dust extinctions to stars as a function of three-dimensional position in the galaxy with reduced or no dependence on models of the stellar color–magnitude diagram. He is using a neural network to model this nuisance. My loyal reader knows that this is my dream: To only use these complex methods on the parts of our problems that are nuisances!

2019-12-05

new methods for cosmology

Kate Storey-Fisher (NYU) and I had a lunch-time conversation about her project to replace the standard large-scale-structure correlation-function estimator with a new estimator that doesn't require sorting the galaxy pairs into separation bins: It estimates continuous functions. We discussed how to present such a new idea to a community that has been using the same binned estimator since the 1990s (and even before that, the estimators were only marginally different). That is, the change Storey-Fisher proposes is the biggest change to correlation-function estimation since it all started, in my (somewhat not humble) opinion.
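The heart of the change is small to state (schematically, as I understand it; this is not Storey-Fisher's code): wherever the standard estimator increments a separation bin for each galaxy pair, the new estimator projects each pair onto a set of continuous basis functions. Top-hat bases recover the old binned estimator exactly.

    import numpy as np

    def projected_paircounts(separations, basis):
        # v_k = sum over pairs of f_k(separation); with top-hat f_k these
        # are the usual binned pair counts. The same projection applies to
        # DD, DR, and RR, and the estimator is solved in coefficient space.
        return np.array([f(separations).sum() for f in basis])

    # toy basis: low-order polynomials on 0 < r < 150
    basis = [lambda r, k=k: (r / 150.0) ** k * (r < 150.0) for k in range(4)]
    seps = np.random.default_rng(0).uniform(0, 150, size=10000)  # fake pairs
    v_dd = projected_paircounts(seps, basis)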

But this creates a problem: How to convince the cosmologists that they need to learn new tricks? We have many arguments, but which one is strongest? We no longer need to bin, and binning is sinning! Or: We can capture more functional variation with fewer degrees of freedom, so we reduce simulation requirements! Or: We can restrict the function space to smooth functions, so we regularize away unphysical high-frequency components! Or: We get smaller uncertainties on the clustering at every scale! Or: We can make our continuous function components be similar to derivatives of the correlation function with respect to cosmological parameters and therefore create clustering statistics that are close to Fisher-optimal given the data and the model!

Writing methodological papers is not easy.

2019-12-04

LFC calibration accuracy; stars form in lines?

Lily Zhao (Yale) and I got to the point that we can calibrate one laser-frequency-comb exposure on EXPRES with a calibration based on all the LFC exposures. We find that we predict the lines in each exposure to a mean (over all lines) of about 3 cm/s! If this holds up to cross-validation, I am happy!
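The cross-validation I have in mind is the obvious one: refit the calibration model with each LFC exposure held out, predict its line positions, and quote the mean error in velocity units. A sketch of the loop (the fit and predict callables here are placeholders, not our actual code):

    import numpy as np

    def loo_error(exposures, fit, predict):
        # fit: list of exposures -> calibration model
        # predict: (model, exposure) -> line-position residuals in cm/s
        errors = []
        for i in range(len(exposures)):
            rest = exposures[:i] + exposures[i + 1:]
            model = fit(rest)
            errors.append(np.mean(np.abs(predict(model, exposures[i]))))
        return np.mean(errors)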

In Stars & Exoplanets Meeting at Flatiron, many fun things happened. Zhao showed us temperature variations observed spectroscopically in Sun-like stars: Are these due to spots? And Marina Kounkel (WWU) showed us incredible visualizations of young stars, which are very clustered in kinematics and age. She believes that they form in lines, and then the lines break up. Her interactive plots were amazing!

2019-12-03

black holes and nucleosynthesis

Today Selma de Mink (Harvard) gave a great and energizing Astrophysics Seminar at NYU. She talked about many things related to the extremely massive-star progenitors of the extremely massive black holes being observed in merger by LIGO. One assumption of her talk, which is retrospectively obvious but was great, is that the vast majority of LIGO events should be first-generation mergers: A second merger is very unlikely, dynamically. But that wasn't her point: Her point was that the masses that LIGO sees will constrain how very massive stars evolve. In particular, she showed that there is a strong prediction of a mass gap: There can't be black holes formed by stellar evolution in the mass range 45 to 150 solar masses. The physics is all about pair-instability supernovae from very low-metallicity stars. But the details of this black-hole mass gap depend on some nuclear reaction rates, so she concludes that LIGO will make nucleosynthetic measurements! The LIGO data probably already do. It's a new world!

2019-12-02

astronomical adversaries: we are go!

The NYU Center for Data Science has a big masters program, and as a part of that, students do capstone research projects with industry and academic partners. I am one of the latter! So Soledad Villar (NYU) and I have been advising two groups of capstone students in projects. One of these groups (Teresa Ningyuan Huang, Zach Martin, Greg Scanlon, and Eva Shuyu Wang) has been working on adversarial attacks against regressions in astronomy. The work is new in part because it brings the idea of attacks to the natural science domain, and because attacks haven't been really defined for regression contexts. Today we decided that this work is ready (enough) to publish. So we are going to try to finish and submit something for a conference deadline this week!