2020-12-31

the selection function; talking it out

I had a quick call today with Hans-Walter Rix (MPIA) in which we had the millionth conversation about the selection function and how it is used in astronomy problems. And how it ought to be used. We resolved our differences, once again. I think it's important to take any project I'm working on and discuss it over and over again. I learned this from various mathematics colleagues, including Jonathan Goodman (NYU), Leslie Greengard (Flatiron), and Soledad Villar (JHU). You don't really understand a project until you can rigorously describe it. And you can only rigorously describe it after quite a few trials and errors.

2020-12-30

discussions about large-scale structure

I had some work conversations today. One was with Kate Storey-Fisher (NYU) about our paper on a continuous-function estimator for the two-point correlation function of galaxies. We were discussing this related paper and how we might give a better or more intuitive mathematical derivation of our estimator. It is correct, but our argument is somewhat indirect: It is that the estimator reduces to Landy–Szalay in the top-hat-bin case, while being affine invariant. That is an odd argument!
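For reference, the baseline we reduce to is the standard Landy–Szalay estimator, which can be sketched in a few lines. This toy is my own setup, not our estimator or pipeline: uniform points in a unit box with illustrative bin edges, for which ξ should come out consistent with zero in every bin.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(42)
D = rng.uniform(size=(2000, 3))   # toy "data": uniform points in a unit box
R = rng.uniform(size=(4000, 3))   # random catalog over the same volume

tD, tR = cKDTree(D), cKDTree(R)
edges = np.array([0.05, 0.10, 0.15, 0.20])   # top-hat separation bins

def shell_counts(ta, tb, edges):
    # cumulative pair counts at each edge, differenced into shells;
    # self-pairs (at distance zero) cancel in the differencing
    return np.diff(ta.count_neighbors(tb, edges)).astype(float)

DD = shell_counts(tD, tD, edges) / 2.0       # unordered pairs
RR = shell_counts(tR, tR, edges) / 2.0
DR = shell_counts(tD, tR, edges)

nD, nR = len(D), len(R)
dd = DD / (nD * (nD - 1) / 2.0)              # normalized pair counts
rr = RR / (nR * (nR - 1) / 2.0)
dr = DR / (nD * nR)

xi = (dd - 2.0 * dr + rr) / rr               # Landy–Szalay
```

Our continuous-function estimator replaces the top-hat bins implicit in `edges` with arbitrary basis functions of separation; the normalization logic is the same.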

I also had a long chat with Lehman Garrison (Flatiron) about N-body simulations, initial conditions, entropy, symmetries, reconstruction, halo occupation, and other matters related to the computational theory of cosmology. I'm hoping there's a project for us to do here.

2020-12-29

almost nothing

I did almost no research today, but I did do some thinking and writing towards a machine-learning project on dynamics and entropy.

2020-12-28

writing about uncertainties

I dug up my old project (from early in the pandemic) to write a note about how to estimate uncertainties on your measurements. I didn't write much today, but I did do some re-reading and made some notes to my future self.

2020-12-23

physical symmetries and machine learning

Soledad Villar (JHU) and I spent time today discussing symmetries and machine-learning methods. Like many astronomers, I'm interested in imposing deep physical symmetries on methods, figuring that if the methods respect the symmetries, they will learn much more relevant and useful things from their training data (and not spend their training information on learning the symmetries). An easy symmetry is the convolutional symmetry, but physics has far more: The laws of physics are permutation-equivariant (which in the ML business is graph symmetry), they are unitary (which in the ML business is (sometimes) normalizing), and they have rotation, translation, and boost symmetries. We looked at papers on gauge-invariant networks, graph networks, and Hamiltonian networks. All extremely relevant! It seems like all the tools are in place to do something interesting in cosmology.
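As a tiny illustration of the permutation (graph) symmetry — this is a Deep-Sets-style layer of my own construction, not taken from any of the papers we read — mixing each set element with the set mean makes the layer commute with any re-ordering of its inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))

def equivariant_layer(X):
    # X has shape (n_elements, 4); acting row-wise, plus through the
    # set mean, makes the layer commute with any permutation of rows
    return np.tanh(X @ W1 + X.mean(axis=0) @ W2)

X = rng.normal(size=(7, 4))
perm = rng.permutation(7)
# permuting inputs then applying the layer == applying then permuting
assert np.allclose(equivariant_layer(X[perm]), equivariant_layer(X)[perm])
```

The same kind of commutation check, with rotations or gauge transformations in place of permutations, is the game the other equivariant-network papers are playing.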

2020-12-22

NSF center proposal

I spent some time today discussing a possible NSF Center on real-time data analysis with Ashley Villar (Columbia) and Tyler Pritchard (NYU), based on the wide-ranging, grass-roots interest in time-domain astrophysics we have discovered in NYC during this pandemic. NSF Centers are big projects!

2020-12-21

finding outliers in housekeeping data

Because we found some apparent anomalies in the DR16 data, Anna-Christina Eilers (MIT) and I are looking at the noise properties of some of the SDSS-IV APOGEE stellar spectra. It's pretty inside baseball but we are comparing the MAD (my favorite statistic—the median absolute difference in brightness between neighboring spectral pixels) to the reported spectral signal-to-noise ratio and stellar parameters. We find that, as expected, MAD (on a normalized spectrum) decreases with SNR, but increases with metallicity and decreases with temperature (because more metal-rich and cooler stars have stronger lines). But there are outliers, where the SNR measurements appear completely wrong. We're following up now, but first: How to sensitively identify outliers in this space? It won't surprise my loyal reader that we tentatively decided to use nearest neighbors with a kd-tree.
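A minimal sketch of both ingredients — the MAD statistic as I define it above, and a kd-tree nearest-neighbor distance as an outlier score. The feature space here is a made-up stand-in (in reality the columns would be things like log MAD, log SNR, metallicity, temperature, suitably scaled):

```python
import numpy as np
from scipy.spatial import cKDTree

def mad(flux):
    # median absolute difference in brightness between neighboring pixels
    return np.median(np.abs(np.diff(flux)))

# toy feature space standing in for (log MAD, log SNR, [Fe/H], Teff)
rng = np.random.default_rng(1)
features = rng.normal(size=(500, 4))
features[0] += 8.0                       # plant one gross outlier

# outlier score: distance to the k-th nearest neighbor in feature space
tree = cKDTree(features)
dists, _ = tree.query(features, k=6)     # k=6: self plus five neighbors
score = dists[:, -1]
# the planted outlier should get by far the largest score
```

The nice property of the k-th-neighbor-distance score is that it is sensitive to points that are isolated in the joint space even when they look unremarkable in any single coordinate.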

2020-12-18

scope for a paper on regression and stellar spectral variability

As my loyal reader knows, Lily Zhao (Yale) has had great success with improving extreme precision radial-velocity measurements. In her latest work, she shows that she can use regression to predict radial-velocity measurements from shape changes in the spectrum, independent of (or even orthogonal to) pure Doppler shifts. Under cross-validation, the regression can improve the scatter in the measured radial velocities. Today with Megan Bedell (Flatiron) we scoped a paper for these results. We want to show that the regression is a baby step towards something like Doppler imaging spectroscopy, in which we would build a model of the full, rotating stellar surface in order to correct for it and see just the center-of-mass motion of the star. If we want to make that argument, we need a source of realistic simulated stellar spectra from a real, spotty, rotating star.

2020-12-17

forwards vs backwards modeling of light curves

My day started with a conversation with Gaby Contardo (Flatiron) about modeling light curves of stars. We have projects in which we try to predict forwards and backwards in time, and compare the results. We're trying to make a good scope for a paper, which could involve classification, regression, or causal inference. Or all three. As usual, we decided to write an abstract to help us picture the full scope of the first paper.

2020-12-16

classifying orbits with pair separations?

Today Adrian Price-Whelan (Flatiron) showed me a great idea for looking at stellar dynamics in the Milky Way, one that I think is genuinely new: Look at the statistics of close, comoving (but unbound) pairs of stars as a function of orbit. The idea is that orbits can be classified (continuously maybe) by how unbound pairs close in phase space separate over time, when their center of mass (say) is on that orbit. And, especially, there are very big differences between chaotic and regular orbits in this respect (it's almost the definition of chaotic). So maybe the distribution of comoving pair separations is a strong function of orbit? That would be awesome. And it relates to things we have thought about with cold stellar streams.
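A toy version of the measurement, in a potential of my own choosing (Hénon–Heiles, a standard chaos testbed; whether this particular initial condition is chaotic is not guaranteed — the point is how the pair separation is measured):

```python
import numpy as np

def accel(x, y):
    # Henon-Heiles potential: V = (x^2 + y^2)/2 + x^2 y - y^3/3
    return -x - 2.0 * x * y, -y - x * x + y * y

def integrate(w, dt=0.01, nsteps=10_000):
    # kick-drift-kick leapfrog; w = (x, y, vx, vy)
    x, y, vx, vy = w
    for _ in range(nsteps):
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
        x += dt * vx
        y += dt * vy
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
    return np.array([x, y, vx, vy])

# two phase-space neighbors on an orbit with energy just below the
# escape energy 1/6, where much of phase space is chaotic
w0 = np.array([0.0, 0.0, 0.41, 0.40])
sep0 = 1e-6
w1 = w0 + np.array([sep0, 0.0, 0.0, 0.0])
sep1 = np.linalg.norm(integrate(w0) - integrate(w1))
# on chaotic orbits sep1/sep0 grows roughly exponentially with time;
# on regular orbits it grows only roughly linearly -- that contrast
# is the proposed orbit classifier
```

Binning `sep1 / sep0` as a function of orbit (or orbital actions) would be the statistic in question.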

2020-12-15

unifying real-space and frequency-space large-scale structure

Today Kate Storey-Fisher (NYU) and I spoke about next projects to do in large-scale structure, now that her first paper on clustering is out. One project is to use the flexibility of her clustering estimator to look at large-scale gradients or variations in the large-scale structure (analogous to power asymmetries in the CMB, for example). We have Abby Williams (NYU) working on this now. Another is to use it to make real-space correlation-function components that are the (appropriate angular averages of) Fourier transforms of power-spectrum (harmonic) modes. If we do that, we can perfectly unify real-space and k-space approaches to large-scale structure, without making the (frankly brutal) approximations that they make in k-space projects. We will have a big carbon footprint, though!
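The dictionary between the two spaces is the spherical-Bessel (j0) transform of the power spectrum. Here's a numerical sanity check of that relation against a case with an analytic answer — a toy Gaussian P(k), in units and conventions of my own choosing:

```python
import numpy as np
from scipy.integrate import quad

a = 2.0                                     # toy width parameter
P = lambda k: np.exp(-k**2 / (2.0 * a**2))  # toy Gaussian power spectrum

def xi(r):
    # xi(r) = (1 / 2 pi^2) * int_0^inf dk k^2 P(k) j0(k r),
    # with j0(x) = sin(x)/x; note np.sinc(x) = sin(pi x)/(pi x)
    integrand = lambda k: k**2 * P(k) * np.sinc(k * r / np.pi)
    val, _ = quad(integrand, 0.0, 40.0)     # P(k) is negligible past k ~ 15
    return val / (2.0 * np.pi**2)

# the 3D Fourier transform of a Gaussian is analytic, so we can check:
xi_exact = lambda r: a**3 / (2.0 * np.pi)**1.5 * np.exp(-a**2 * r**2 / 2.0)
```

The correlation-function components in the proposal above would be built from exactly this kind of transform, one harmonic mode at a time.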

2020-12-12

zeroth draft done!

Today I finished the zeroth draft of a (first-author; gasp!) paper about linear regression with large numbers of parameters. My co-author is Soledad Villar (JHU). The paper shows how—when you are fitting a flexible model like a polynomial or a Fourier series—you can have more parameters than data with no problem, and in fact you often do better in that regime, even in predictive accuracy for held-out data. It also shows that as the number of parameters goes to infinity, your linear regression becomes a Gaussian process if you choose your regularization correctly. It is designed to be like a textbook chapter, so we are faced with the question: Where to publish (other than arXiv, which is a given)?
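A toy demonstration of the more-parameters-than-data regime (Fourier features, 20 data points, 201 parameters; all choices here are illustrative, not the paper's actual experiments):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 20, 100                       # 2m + 1 = 201 parameters, 20 data points
x_train = np.sort(rng.uniform(0.0, 1.0, n))
y_train = np.sin(4 * np.pi * x_train) + 0.1 * rng.normal(size=n)

def features(x, m):
    # Fourier design matrix: a constant, then cosine and sine features
    ks = np.arange(1, m + 1)
    return np.hstack([np.ones((len(x), 1)),
                      np.cos(2 * np.pi * np.outer(x, ks)),
                      np.sin(2 * np.pi * np.outer(x, ks))])

A = features(x_train, m)             # shape (20, 201): underdetermined!
# lstsq with rcond=None returns the minimum-norm solution in the
# underdetermined case; that choice plays the role of regularization
beta, *_ = np.linalg.lstsq(A, y_train, rcond=None)
# the fit passes exactly through every training point (p > n)
```

The minimum-norm choice is what keeps the interpolating fit sensible between the data points, and it is the choice whose infinite-parameter limit connects to a Gaussian process.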

2020-12-11

stellar surface imaging

Today Rachael Roettenbacher (Yale) gave the CCA Colloquium, on Doppler imaging and interferometry and other methods for imaging stellar surfaces. In conversations with Roettenbacher and others (including Lily Zhao at Yale and Megan Bedell and Rodrigo Luger at Flatiron), I'm coming around to the position that extremely precise radial-velocity measurements of stars will require accurate models of rotating, spotty, time-evolving stellar surfaces. At the end of her talk, there were some discussions about this point: Should EPRV surveys be associated with stellar surface imaging campaigns? Probably, if we want to characterize true Earth analogs!

2020-12-09

how clustered is the DM in the Milky Way halo?

Today David Spergel (Flatiron) came by to discuss the following question: How much do we know—empirically—about fluctuations or clustering in the dark matter distribution in the Milky Way halo? Spergel's idea is maybe to use old or metal-poor halo stars: Since stars form in the centers of their DM halos (we think), the clustering or fluctuations in phase space of the old stars should always be larger than the clustering or fluctuations in the dark matter. I bet that's true! And it's easy to test in standard simulations, I think.

2020-12-08

cross-validation, visualized

I have spent a lot of time in my life advocating cross-validation for model selection. That's sad, maybe? But for many reasons, I think it is much better than computing Bayesian evidences or fully-marginalized likelihoods (FMLs on this site!). Today, for the paper Soledad Villar (JHU) and I are writing, I made this figure, which demonstrates leave-one-out cross-validation. Each curve is a different leave-one-out fit, decorated with that fit's prediction for the left-out point. Instructive? I hope so.
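The logic of that figure, in schematic code (a toy polynomial example of my own, not the paper's figure code):

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(0.0, 1.0, 12)
y = 1.0 + 2.0 * x - 3.0 * x**2 + 0.1 * rng.normal(size=12)

def loo_cv_score(x, y, degree):
    # leave-one-out: refit without point i, predict the left-out point
    errs = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        coef = np.polyfit(x[mask], y[mask], degree)
        errs.append(y[i] - np.polyval(coef, x[i]))
    return np.mean(np.square(errs))

# the generating model is quadratic; degree 1 underfits,
# and degree 6 typically overfits
scores = {d: loo_cv_score(x, y, d) for d in (1, 2, 6)}
```

Choosing the degree with the lowest leave-one-out score is the model selection; no evidence integral required.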

2020-12-07

#NeurIPS2020 tutorial

Today Kate Storey-Fisher (NYU) and I led a tutorial at the beginning of the big NeurIPS machine-learning meeting. Our title was Machine Learning for Astrophysics and Astrophysics Problems for Machine Learning. We used our forum to advertise astronomy problems to machine-learning practitioners: We astronomers have interesting, hard problems, and our data sets are free! We made public Jupyter notebooks that download data samples to give the crowd a taste of what's available, and how easy it is to get and munge into shape.

Our slides are here and our four notebooks (for four different kinds of data) are here, here, here, and here. We got good audience interactions, including especially a group of great astronomers who helped answer questions in the chats!

2020-12-06

working on NeurIPS tutorial

I spent weekend research time elaborating my code notebooks and my slides for the #NeurIPS2020 tutorial that Kate Storey-Fisher (NYU) and I are doing on Monday. I'm a bit stressed with the last-minute prep, but how else would I ever do this??

2020-12-04

group meeting awesome; color light curves from CoRoT

Today at the Astronomical Data Group meeting (led by Dan Foreman-Mackey) we did our quasi-monthly thing of getting a quick update from everyone who shows up. And 18 people showed up! Everyone gave an update; it was great to see the breadth of activity in the group. One contribution that got me excited came from Christina Hedges (Ames, but still part of the Group!), who is looking at CNES CoRoT data. The mission was designed to have a tiny bit of color sensitivity, which makes it possible to look at colored light-curve variations and distinguish causal effects. This builds on work by Hedges to look at tiny point-spread-function changes in NASA Kepler and TESS data to get a tiny bit of color information in those white-light missions. Colored light curves are the future.

2020-12-03

Gaia EDR3

Today ESA Gaia EDR3 dropped! It was a fun day; the data are more precise and less noisy! I'm involved in a few different projects with the data. With Jason Hunt (Flatiron) and Adrian Price-Whelan (Flatiron) I am looking at the local velocity-space structure in the disk, and seeing if we can classify features by looking at how they vary spatially around the Solar position. With Anna-Christina Eilers (MIT) I am going to update our spectrophotometric distance estimates to APOGEE luminous red giants. With Ana Bonaca (Harvard) I will find out if we can improve the kinematics and orbit identification of stellar streams. None of these projects got very far today, but we did make this visualization!

2020-12-02

preparing for a NeurIPS tutorial

Today Kate Storey-Fisher (NYU) and I got together and parallel-worked on our slides for our big #NeurIPS2020 tutorial next week. Somehow it is easier to work on things in parallel!

2020-12-01

constraining transformations to unit determinant

I learned a lot about linear algebra today! I learned that if you exponentiate a matrix (yes, matrix exponentiation; if this makes you uncomfortable, think about the Taylor series for the exponential, and apply it to a matrix), the determinant of the resulting matrix is the exponential of the trace of the exponent matrix. So if you need unit-determinant matrices, you can make them by multiplying together exponentials of traceless matrices.
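To make the identity concrete, a quick numerical check with SciPy's matrix exponential (nothing project-specific here):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))

# det(exp(A)) == exp(trace(A))
assert np.isclose(np.linalg.det(expm(A)), np.exp(np.trace(A)))

# subtract off the trace to make A traceless, and the exponential has
# unit determinant: exactly the volume-preserving matrices we need
A0 = A - np.trace(A) / 4.0 * np.eye(4)
assert np.isclose(np.linalg.det(expm(A0)), 1.0)
```

Parameterizing a transformation as `expm` of a traceless matrix therefore enforces volume preservation by construction, with no explicit constraint needed during optimization.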

Why do I care about all this? Because Jason Hunt (Flatiron), Adrian Price-Whelan (Flatiron), and I realized yesterday that we need to make some of our transformation matrices volume-preserving. This is in our MySpace project for ESA Gaia EDR3, which finds a data-driven transformation of phase space to emphasize velocity structure. And I learned that JAX (the simple auto-differentiation tool for numpy and scipy) knows about matrix exponentiation.

I give thanks to Soledad Villar (JHU) for these insights about linear algebra.