2017-04-14

writing

I worked on putting references into my similarity-of-objects document (how do you determine that two different objects are identical in their measurable properties?), and tweaking the words, with the hope that I will have something postable to the arXiv soon.

2017-04-13

crazy space hardware

I spent today at JPL, where Leonidas Moustakas (JPL) set up for me a great schedule with various of the astronomers. I met the famous John Trauger (JPL), who was the PI on WFPC2 and deserves some share of the credit for repairing the Hubble Space Telescope. I discussed coronagraphy with Trauger and various others. I learned about the need for coronagraphs to have two deformable mirrors (not just one) to be properly adaptive. With Dimitri Mawet (Caltech) I discussed what kind of data set we would like to have in order to learn in a data-driven way to predictively adapt the deformable mirrors in a coronagraph that is currently taking data.

With Eric Huff (JPL) I discussed the possibility of doing weak lensing without ever explicitly measuring any galaxies—that is, measuring shear in the pixels of the images of the field directly. I also discussed with him the (apparently insane but maybe not) idea of using the Sun itself as a gravitational lens, capable of imaging continents on a distant, rocky exoplanet. This requires getting a spacecraft out to some 550 AU, and then positioning it to km accuracy! Oh and then blocking out the light from the Sun.
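
That 550 AU number is easy to check: light grazing the Sun at impact parameter b is deflected by 4GM/(c^2 b), so the rays cross the axis at distance b^2 c^2 / (4GM). Here is a back-of-the-envelope sketch with hard-coded constants (my own scratch calculation, nothing from the JPL conversation):

```python
# Minimum focal distance of the solar gravitational lens.
# Light grazing the Sun at impact parameter b is deflected by alpha = 4 G M / (c^2 b),
# so rays cross the optical axis at d = b / alpha = b^2 c^2 / (4 G M).
G = 6.674e-11     # m^3 kg^-1 s^-2
M_sun = 1.989e30  # kg
c = 2.998e8       # m s^-1
R_sun = 6.957e8   # m; the smallest usable impact parameter
AU = 1.496e11     # m

d_focus = R_sun**2 * c**2 / (4 * G * M_sun)
print(d_focus / AU)  # ~550, in agreement with the number quoted above
```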

Martin Elvis (CfA) gave a provocative talk today, about the future of NASA astrophysics in the context of commercial space, which might drive down prices on launch vehicles, and drive up the availability of heavy lift. A theme of his talk, and a theme of many of my conversations during the day, was just how long the time-scales are on NASA astrophysics missions, from proposal to launch. At some point missions might start to take longer than a career; that could be very bad (or at least very disruptive) for the field.

2017-04-12

ZTF; self-calibration; long-period planets

I spent today at Caltech, where I spoke about self-calibration. Prior to that I had many interesting conversations. From Anna Ho (Caltech) I learned that ZTF is going to image 15,000 square degrees per night. That is life-changing! I argued that they should position their fields to facilitate self-calibration, which might break some ideas they have about image differencing.

With Nadia Blagorodnova (Caltech) I discussed calibration of the SED Machine, which is designed to do rapid, low-resolution follow-up of ZTF and LSST events. They are using dome and twilight flats (something I said is a bad idea in my colloquium), and indeed they can see that these flats are deficient or inaccurate. We discussed how to take steps towards self-calibration.

With Heather Knutson (Caltech) I discussed long-period planets. She is following up (with radial velocity measurements) the discoveries that Foreman-Mackey and I (and others) made in the Kepler data. She doesn't fully agree with our finding that there are something like 2 planets per star (!) at long periods, but of course her radial-velocity work has different sensitivity to planets. We discussed the possibility of using radial-velocity surveys to do planet populations work; she believes it is possible (something I have denied previously, on the grounds of unrecorded human decision-making in the observing strategies).

In my talk I made some fairly aggressive statements about Euclid's observing strategies and calibration. That got me some valuable feedback, including some hope that they will modify their strategies before launch. The things I want can be set or modified at the 13th hour!

2017-04-11

self-calibration

I worked more today on my slides on self-calibration for the 2017 Neugebauer Lecture at Caltech. I had an epiphany, which is that the color–magnitude diagram model I am building with Lauren Anderson (Flatiron) can be seen in the same light as self-calibration. The “instrument” we are calibrating is the physical regularities of stars! (This can be seen as an instrument built by God, if you want to get grandiose.) I also drew a graphical model for the self-calibration of the Sloan Digital Sky Survey imaging data that we did oh so many years ago. It would probably be possible to re-do it with full Bayes with contemporary technology!

2017-04-10

causal photometry

Last year, Dun Wang (NYU) and Dan Foreman-Mackey (UW) discovered, on a visit to Bernhard Schölkopf (MPI-IS), that independent components analysis can be used to separate spacecraft and stellar variability in Kepler imaging, and perform variable-source photometry in crowded-field imaging. I started to write that up today. ICA is a magic method, which can't be correct in detail, but which is amazingly powerful straight out of the box.
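
A minimal sketch of the flavor of the method (my toy illustration with made-up signals, not Wang's actual pipeline), using scikit-learn's FastICA to unmix two observed light curves that are linear blends of a "stellar" signal and a "spacecraft" signal:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Toy version of the idea: two observed light curves are different linear
# blends of a stellar signal and a spacecraft systematic; ICA recovers the
# independent sources without knowing the mixing matrix.
rng = np.random.default_rng(42)
t = np.linspace(0.0, 10.0, 2000)
stellar = np.sin(2 * np.pi * t / 3.1)              # periodic stellar variability
spacecraft = np.sign(np.sin(2 * np.pi * t / 0.7))  # square-wave pointing systematic
S = np.c_[stellar, spacecraft]
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                          # unknown mixing matrix
X = S @ A.T + 0.02 * rng.standard_normal(S.shape)   # observed blends, plus noise

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)  # recovered sources, up to sign, scale, and order
```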

I also worked on my slides for the 2017 Neugebauer Memorial Lecture at Caltech, which is on Wednesday. I am giving a talk the likes of which I have never given before.

2017-04-07

searches for cosmological estimators

I spent my research time today working through pages of the nearly-complete PhD dissertation of MJ Vakili (NYU). The thesis contains results in large-scale structure and image processing, which are related through long-term goals in weak lensing. In some ways the most exciting part of the thesis for me right now is the part on HST WFC3 IR calibration, in part because it is new, and in part because I am going to show some of these results in Pasadena next week.

In the morning, Colin Hill (Columbia) gave a very nice talk on secondary anisotropies in the cosmic microwave background. He has found a new (and very simple) way to detect the kinetic S-Z effect statistically, and can use it to measure the baryon fraction in large-scale structure empirically. He has found a new statistic for measuring the thermal S-Z effect too, which provides more constraining power on cosmological parameters. In each case, his statistic or estimator is cleverly designed around physical intuition and symmetries. That led me to ask him whether even better statistics might be found by brute-force search, constrained by symmetries. He agreed and has even done some thinking along these lines already.

2017-04-06

direct detection of the cosmic neutrino background

Today was an all-day meeting at the Flatiron Institute on neutrinos in cosmology and large-scale structure, organized by Francisco Villaescusa-Navarro (Flatiron). I wasn't able to be at the whole meeting, but two important things I learned in the part I saw are the following:

Chris Tully (Princeton) astonished me by showing his real, funded attempt to actually directly detect the thermal neutrinos from the Big Bang. That is audacious. He has a very simple design, based on capture of electron neutrinos by tritium that has been very loosely bound to a graphene substrate. Details of the experiment include absolutely enormous surface areas of graphene, and also very clever focusing (in a phase-space sense) of the liberated electrons. I'm not worthy!

Raul Jimenez (Barcelona) spoke about (among other things) a statistical argument for a normal (rather than inverted) hierarchy for neutrino masses. His argument depends on putting priors over neutrino masses and then computing a Bayes factor. This argument made the audience suspicious, and he got some heat during and after his talk. Some comments: One is that he is not just doing simple Bayes factors; he is learning a hierarchical model and assessing within that. That is a good idea. Another is that this is actually the ideal place to use Bayes factors: Both models (normal and inverted) have exactly the same parameters, with exactly the same prior. That obviates many of my usual objections (yes, my loyal reader may be sighing) to computing the integrals I call FML. I need to read and analyze his argument at some point soon.
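
In that special case (identical parameters, identical priors in both models), the Bayes factor reduces to a ratio of prior-averaged likelihoods, each of which can be estimated by brute-force Monte Carlo. A generic sketch of that computation (my illustration; the log-likelihood functions and prior draws are placeholders, and this is emphatically not Jimenez's actual analysis):

```python
import numpy as np

# Estimate the log of the prior-averaged likelihood (the integral I call FML)
# by simple Monte Carlo over draws from the prior.
def log_fml(log_likelihood, prior_samples):
    logL = np.array([log_likelihood(theta) for theta in prior_samples])
    # stable log of the mean of exp(logL)
    return np.logaddexp.reduce(logL) - np.log(len(logL))

# With a shared prior over the (hypothetical) mass parameters:
# log_bf = log_fml(logL_normal, prior_samples) - log_fml(logL_inverted, prior_samples)
```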

One amusing note about the day: For technical reasons, Tully really needs the neutrino mass hierarchy to be inverted (not normal), while Jimenez is arguing that the smart money is on the normal (not inverted) hierarchy.

2017-04-05

a stellar stream with only two stars? And etc

In Stars group meeting, Stephen Feeney (Flatiron) walked us through his very complete hierarchical model of the distance ladder, including supernova Hubble Constant measurements. He can self-calibrate and propagate all of the errors. The model is seriously complicated, but no more complicated than it needs to be to capture the covariances and systematics that we worry about. He doesn't resolve (yet) the tension between distance ladder and CMB (especially Planck).

Semyeong Oh (Princeton) and Adrian Price-Whelan (Princeton) reported on some of their follow-up spectroscopy of co-moving pairs of widely separated stars. They have a pair that is co-moving, moving at escape velocity in the halo, and separated by 5-ish pc! This could be a cold stellar stream detected with just two stars! How many of those will we find? Yet more evidence that Gaia changes the world.

Josh Winn (Princeton) dropped by and showed us a project that, by finding very precise stellar radii, gets more precise planet radii. That, in turn, shows that the super-Earths really split into two populations, super-Earths and mini-Neptunes, with a deficit between. Meaning: There are non-trivial features in the planet radius distribution. He showed some attempts to demonstrate that this is real, reminding me of the whole accuracy vs precision thing, once again.

In Cosmology group meeting, Dick Bond (CITA) corrected our use of “intensity mapping” to “line intensity mapping” and then talked about things that might be possible as we observe more and more lines in the same volume. There is a lot to say here, but some projects are going small and deep, and others are going wide and shallow; we learn complementary things from these approaches. One question is: How accurate do we need to be in our modeling of neutral and molecular gas, and the radiation fields that affect them, in order for us to do cosmology with these observables? I am hoping we can simultaneously learn things about the baryons, radiation, and large-scale structure.

2017-04-04

words on a plane

On the plane home, I worked on my similarity-of-vectors (or stellar twins) document. I got it to the first-draft stage.

2017-04-03

how to add and how to subtract

My only research today was conversations about various matters of physics, astrophysics, and statistics with Dan Maoz (TAU), as we hiked near the Red Sea. He recommended these three papers on how to add and how to subtract astronomical images. I haven't read them yet, but as my loyal reader knows, the word “optimal” is a red flag for me, as in I'm-a-bull-in-a-bull-ring type of red flag. (Spoiler alert: The bull always loses.)

On the drive home Maoz expressed the extremely strong opinion that dumping a small heat load Q inside a building during the hot summer does not lead to any additional load on that building's air-conditioning system. I spent part of my late evening thinking about whether there are any conceivable assumptions under which this position might be correct. Here's one: The building is so leaky (of air) that the entire interior contents of the building are replaced before the A/C has cooled it by a significant amount. That would work, but it would also be a limit in which A/C doesn't do anything at all, really; that is, in this limit, the interior of the building is the same temperature as the exterior. So I think I concluded that if you have a well-cooled building, if you add heat Q internally, the A/C must do marginal additional work to remove it. One important assumption I am making is the following (and maybe this is why Maoz disagreed): The A/C system is thermostatic and hits its thermostatic limits from time to time. (And that is inconsistent with the ultra-leaky-building idea, above.)
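
For my own records, the bookkeeping I have in mind is just steady-state energy balance (my formulation of the argument, not Maoz's):

```latex
% Steady state for a thermostat-limited building: the A/C removes the
% envelope leak-in plus anything dumped inside, so an internal load Q
% costs marginal compressor work Q / COP.
\dot{Q}_{\mathrm{AC}} = \dot{Q}_{\mathrm{envelope}} + \dot{Q}_{\mathrm{internal}} ,
\qquad
\Delta W_{\mathrm{marginal}} = \dot{Q}_{\mathrm{internal}} / \mathrm{COP} .
```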

2017-04-02

John Bahcall (and etc)

I spent today at Tel Aviv University, where I gave the John Bahcall Astrophysics Lecture. I spoke about exoplanet detection and population inferences. I spent quite a bit of the day with Dovi Poznanski (TAU) and Dan Maoz (TAU). Poznanski and I discussed extensions and alternatives to his projects to use machine learning to find outliers in large astrophysical data sets. This continued conversations with him and Dalya Baron (TAU) from the previous evening.

Maoz and I discussed his conversions of cosmic star-formation history into metal-enrichment histories. These involve the SNIa delay times, and they provide new interpretations of the alpha-to-Fe vs Fe-to-H ratio diagrams. The abundance ratios don't drop in alpha-to-Fe as soon as the SNIa kick in (that's the standard story, but it's wrong); they drop when the SNIa contribution to the metal production rate exceeds the core-collapse contribution. If the star-formation history is continuous, this can be far after the appearance of the first SNe Ia. Deep stuff.

The day gave me some time to reflect on my time with John Bahcall at the IAS. I have too much to say here, but I found myself in the evening reflecting on his remarkable and prescient scientific intuition. He was one of the few astronomers who understood, immediately on the early failure of HST, that it made more sense to try to repair it than try to replace it. This was a great realization, and transformed both astrophysics and NASA. He was also one of the few physicists who strongly believed that the Solar neutrino problem would lead to a discovery of new physics. Most particle physicists thought that the Solar model couldn't be that robust, and most astronomers didn't think about neutrinos. Boy was John right!

(I also snuck in a few minutes on my stellar twins document, which I gave to Poznanski for comments.)

2017-03-31

the future of astrophysical data analysis

Dan Foreman-Mackey (UW) crashed NYC today, surprising me, and disrupting my schedule. We began our day by arguing about the future of hierarchical modeling. His position is (sort-of) that the future is not hierarchical Bayes as it is currently done, but rather that we will be doing things that are much more ABC-like. That is, astrophysics theory is (generally) computational or simulation-based, and the data space is far too large for us to understand densities or probabilities in the data space. So we need ways to responsibly use simulations in inference. Right now the leading method is what is called (dumbly) ABC. I asked: So, are we going to do CMB component separation at the pixel level with ABC? This seems impossible at the present day, and DFM pointed out that ABC is best when precision requirements are low. When precision requirements are high, there aren't really options that have computer simulations inside the inference loop!
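
For reference, the simplest (rejection-sampling) flavor of ABC fits in a few lines. This is a generic sketch with placeholder interfaces (simulate, sample_prior, and summary are stand-ins you would supply), not anyone's production inference:

```python
import numpy as np

# Rejection ABC: keep prior draws whose simulated data land within eps of the
# observed data, as measured in some summary-statistic space.
def rejection_abc(observed, simulate, sample_prior, summary, eps, n_draws=100_000):
    s_obs = np.asarray(summary(observed))
    kept = []
    for _ in range(n_draws):
        theta = sample_prior()                        # draw parameters from the prior
        s_sim = np.asarray(summary(simulate(theta)))  # run the black-box simulator
        if np.linalg.norm(s_sim - s_obs) < eps:
            kept.append(theta)                        # accept: approximate posterior draw
    return np.array(kept)
```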

Many other things happened today. I spent time with Lauren Anderson (Flatiron), validating and inspecting the output of our parallax inferences. I had a phone call with Fed Bianco (NYU) about how to adapt Gaussian Processes to make models of supernova light curves. And Foreman-Mackey and I spent time talking about linear algebra, and also this blog post, with which we more-or-less agree (though perhaps it doesn't quite capture all the elements that contribute (positively and negatively) to the LTFDFCF of astronomers!).

2017-03-30

linear algebra; huge models

I had a long conversation today with Justin Alsing (Flatiron) about hierarchical Bayesian inference, which he is thinking about (and doing) in various cosmological contexts. He is thinking about inferring a density field that simultaneously models the galaxy structures and the weak lensing, to do a next-generation (and statistically sound) lensing tomography. His projects are amazingly sophisticated, and he is not afraid of big models. We also talked about using machine learning to do emulation of expensive simulations, initial-conditions reconstruction in cosmology, and moving-object detection in imaging.

I also spent time playing with my linear algebra expressions for my document on finding identical stars. Some of the matrices in play are low-rank; so I ought to be able to either simplify my expressions or else reduce the number of computational steps. Learning about my limitations, mathematically! One thing I re-discovered today is how useful it is to use the Kusse & Westwig notation and conceptual framework for thinking about hermitian matrices and linear algebra.
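
The standard trick for the low-rank pieces is presumably the Woodbury matrix identity, in which the expensive inversion only ever touches small matrices. A numerical sanity check (my scratch code, not the document's algebra):

```python
import numpy as np

# Woodbury: (A + U C V)^-1 = A^-1 - A^-1 U (C^-1 + V A^-1 U)^-1 V A^-1.
# With A diagonal and the correction low-rank (k << n), only k-by-k matrices
# ever get inverted.
rng = np.random.default_rng(0)
n, k = 50, 3
A = np.diag(rng.uniform(1.0, 2.0, n))  # big but trivially invertible
U = rng.standard_normal((n, k))
C = np.eye(k)
V = U.T                                 # symmetric low-rank correction U C U^T

Ainv = np.diag(1.0 / np.diag(A))
small = np.linalg.inv(np.linalg.inv(C) + V @ Ainv @ U)  # k-by-k inversions only
woodbury = Ainv - Ainv @ U @ small @ V @ Ainv
assert np.allclose(woodbury, np.linalg.inv(A + U @ C @ V))
```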

2017-03-29

ergodic stars; blinding and pre-registration

In the Stars group meeting, Nathan Leigh (AMNH) and Nick Stone (Columbia) spoke about 4-body scattering or 2-on-2 binary-binary interactions. These can lead to 3-1, 2-2, and 2-1-1 outcomes, with the latter being most common. They are using a fascinating and beautiful ergodic-hypothesis-backed method (constrained by conservation laws) to solve for the statistical input–output relations quasi-analytically. This is a beautiful idea and makes predictions about star-system evolution in the Galaxy.

In the Cosmology group meeting, Alex Malz (NYU) led a long and wide-ranging discussion of blinding (making statistical results more reliable by pre-registering code or sequestering data). The range of views in the room was large, but all agreed that you need to be able to do exploratory data analysis and also protect against investigator bias. My position is that we had better be doing some form of blinding for our most important questions, but I also think that we need to construct these methods to permit people to play with the data and permit public data releases that are uncensored and unmodified. One theme which came up is that astronomy's great openness is a huge asset here. Fundamentally we are protected (in part) by the availability of the data for re-analysis.

2017-03-28

Local Group on the Local Group

Today, Kathryn Johnston (Columbia) organized a “Local Group on the Local Group” meeting at Columbia. Here are some highlights:

Lauren Anderson (Flatiron) gave an update on her data-driven model of the color–magnitude diagram of stars. This led to a conversation about which features in her deconvolved CMD are real, and about whether there are too many red-clump stars given the total catalog size.

Steven Mohammed (Columbia) showed our GALEX Galactic-Plane survey data on the Gaia TGAS stars. The GALEX colors look very sensitive to metallicity and possibly other abundances. The audience suggested that we look at the full dependences on metallicity and temperature and surface gravity to see if we can break all degeneracies. This led to more discussion of the use of the Red Clump stars for Galactic science.

Adrian Price-Whelan (Columbia) presented a puzzle about the Galactic globular cluster system, which he has been thinking about. Are the distant clusters accreted? The in-situ formation hypothesis is unpalatable (it had to be many clusters at early times; should be many thin streams); the accreted hypothesis over-produces the smooth component of the stellar halo (unless dwarf galaxies had far more GCs per unit stellar mass in the past). These problems can be resolved, but only with strong predictions.

Yong Zheng (Columbia) spoke about the gaseous Magellanic stream and associated (or plausibly associated) high-velocity clouds. Many of the challenges in interpretation connect to the problem that we don't know where the gas is along the line of sight. She showed really nice data on something called Wright’s Cloud. For this huge structure—and for the stream as a whole—there is little to no associated stellar component.

Nicola Amorisco (Harvard) showed theoretical simulations of the accreted part of the MW (and MW-like-galaxy) halo, with the goal of finding stellar-halo observables that strongly co-vary with the assembly history of the dark-matter halo. Both theory and observations suggest large scatter in halo properties at Milky-Way-like masses, and much less scatter at higher masses (because of central-limit-like considerations). His results are promising for understanding the MW assembly history.

Glennys Farrar (NYU) spoke about the MW magnetic field, using rotation measures and CMB to constrain the model. She showed UHECR deflections in the inferred magnetic field, and also discussed implications of her results for electron and cosmic-ray diffusion. There are also tantalizing implications for the synchrotron spectrum and CMB component separation. One interesting comment: If her results are right for the scale and amplitude of the field, there are serious questions about origin and generation; is it primordial or generated on scales much larger than the galaxy?

2017-03-27

identical, statistically speaking

My research activity today was to re-write, from scratch (well, I really started yesterday) my document on how you tell whether two noisily measured objects are identical. This is an old and solved problem! But I am writing the answer in astronomer-friendly form, with a few astronomy-related twists. I have no idea whether this is a paper, a section of a paper, or something else. My re-write was caused by the algebra I learned from Leistedt, and the customers (so to speak) of the document are Rix, Ness, and Hawkins, all of whom are thinking about finding identical pairs of stars.

2017-03-24

how to write an April Fools' paper

I had a great visit to the University of Toronto Department of Astronomy and Astrophysics (and Dunlap Institute) today. I had great conversations about scintillometry (new word?) and the future of likelihood functions and component separation in the CMB. I also discussed pairwise velocity differences in cosmology, and probabilistic supernova classification. There is lots going on. I gave my talk on The Cannon, in which I was perhaps way too pessimistic about chemical tagging!

Early in the day, I ate Toronto-style (no, not Montreal-style) bagels with Dustin Lang (Toronto) and discussed many of the things we like to discuss, like finding very faint outer-Solar-System objects in all the data Lang wrangles, like the differences between accuracy and precision, and even how to define accuracy in astrophysics, and like April Fools' papers, which have to meet four criteria:

  1. conceptually interesting inference
  2. extremely challenging computation
  3. no long-term scientific value to the specific results found
  4. non-irrelevant mention of April 1 in abstract
It is a brutal set of requirements, but we have met them twice. I think this year is out (because of criterion 2), but maybe 2018?

2017-03-23

math with Gaussians

My one piece of research news today was an email exchange with Boris Leistedt (NYU) in which he completely took me to school on math with Gaussians. My intuition (expressed this week) that there was an easier way to do all the operations I was doing was right! But everything else I was doing was not wrong but wrong-headed. Anyways, this should simplify some things right away. The key observation is that a product of Gaussians can be transformed into another product of Gaussians, in another basis, trivially. More soon!
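
I believe the identity in question is the standard product-of-Gaussians formula, recorded here for my own notes (my reconstruction; Leistedt's version may be more general):

```latex
% A product of two Gaussians in x is a Gaussian in x times a Gaussian in (a - b):
% precisions add, and the new mean is the precision-weighted average.
N(x \mid a, A)\, N(x \mid b, B) = N(a \mid b, A + B)\, N(x \mid c, C) ,
\quad
C = (A^{-1} + B^{-1})^{-1} ,
\quad
c = C\,(A^{-1} a + B^{-1} b) .
```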

2017-03-22

circular reasoning, continuity of globular clusters

In Stars group meeting, Lauren Anderson (Flatiron) showed our toy example that demonstrates why our method for de-noising the Gaia TGAS data works. That led to some useful conversation that might help us explain our project better. I didn't take all the notes I should have! One idea that came up is that if there are two populations, one only seen at very low signal-to-noise, then that second population can easily get pulled in to the first. Another is the question of the circularity of the reasoning. Technically, our reasoning is circular, but it wouldn't be if we marginalized out the hyper-parameters (that is, the parameters of our color–magnitude diagram).

Also in the Stars meeting, Ruth Angus (Columbia) suggested how we might responsibly look for the differences in exoplanet populations with stellar age. And Semyeong Oh (Princeton) and Adrian Price-Whelan (Princeton) described their very successful observing run to follow up the comoving stellar pairs. Preliminary analyses suggest that many of the pairs (which we found only with transverse information) are truly comoving.

In Cosmology group meeting, Jeremy Tinker discussed the possibility of using halo-occupation-like approaches to determine how the globular cluster populations of galaxies form and evolve. This led to a complicated and long discussion, with many ideas and issues arising. I do think that various simple scenarios could be ruled out, making use of some kind of continuity argument (with sources and sinks, of course).

I spent some time hidden away working on multiplying and integrating Gaussians. I am doing lots of algebra, completing squares. I have the tiniest suspicion that there is an easier way, or that all of the math I am doing has a simple answer at the end that I could have seen before starting.

2017-03-21

half-pixel issues; building our own Gibbs sampler

First thing in the morning I met with Steven Mohammed (Columbia) and Dun Wang (NYU) to discuss GALEX calibration and imaging projects. Wang has a very clever astrometric calibration of the satellite, built by cross-correlating photons with the positions of known stars. This astrometric calibration depends on properties of the photons for complicated reasons that relate to the detector technology on board the spacecraft. Mohammed finds, in an end-to-end test of Wang's images, that there might be half-pixel issues in our calibration. We came up with methods for tracking that down.

Late in the day, I met with Ruth Angus (Columbia) to discuss the engineering in her project to combine all age information (and self-calibrate all methods). We discussed how to make a baby test where we can do the sampling with technology we are good at, before we write a brand-new Gibbs sampler from scratch. Why, you might ask, would any normal person write a Gibbs sampler from scratch when there are so many good packages out there? Because you always learn a lot by doing it! If our home-built Gibbs doesn't work well, we will adopt a package.
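
For flavor: once the conditionals are known, a from-scratch Gibbs sampler really is only a few lines. Here is a minimal toy for a correlated two-dimensional Gaussian (just the pattern, nothing to do with the age-calibration model itself):

```python
import numpy as np

# Gibbs sampling a 2-d Gaussian with zero mean, unit variances, correlation rho:
# alternately draw each coordinate from its exact conditional given the other.
def gibbs_2d_gaussian(rho, n_steps=10_000, seed=0):
    rng = np.random.default_rng(seed)
    x, y = 0.0, 0.0
    cond_sd = np.sqrt(1.0 - rho**2)       # sd of x given y (and of y given x)
    chain = np.empty((n_steps, 2))
    for i in range(n_steps):
        x = rng.normal(rho * y, cond_sd)  # x | y ~ N(rho y, 1 - rho^2)
        y = rng.normal(rho * x, cond_sd)  # y | x ~ N(rho x, 1 - rho^2)
        chain[i] = x, y
    return chain

chain = gibbs_2d_gaussian(rho=0.9)
print(np.corrcoef(chain[1000:].T))  # off-diagonal should be close to 0.9
```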

2017-03-20

statistics questions

I spent time today writing in the method section of the Anderson et al paper. I realized in writing it that we have been thinking about our model of the color–magnitude diagram as being a prior on the distance or parallax. But it isn't really, it is a prior on the color and magnitude, which for a given noisy, observed star, becomes a prior on the parallax. We will compute these implicit priors explicitly (it is a different prior for every star) for our paper output. We have to describe this all patiently and well!

At some point during the day, Jo Bovy (Toronto) asked a very simple question about statistics: Why does re-sampling the data (given presumed-known Gaussian noise variances in the data space) and re-fitting deliver samples of the fit parameters that span the same uncertainty distribution as the likelihood function would imply? This is only true for linear fitting, of course, but why is it true (and no, I don't mean what is the mathematical formula!)? My view is that this is (sort-of) a coincidence rather than a result, especially since it (to my mind) confuses the likelihood and the posterior. But it is an oddly deep question.
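
The linear case is easy to check numerically: the refit parameters scatter with covariance sigma^2 (A^T A)^{-1}, which is exactly the likelihood width. A quick sketch of that check (my toy, not Bovy's):

```python
import numpy as np

# Straight-line fit: resample the data with its known Gaussian noise, refit,
# and compare the parameter scatter to the likelihood-implied covariance.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 30)
A = np.vander(x, 2)  # design matrix for y = m x + b
sigma = 0.5
y = A @ np.array([2.0, 1.0]) + sigma * rng.standard_normal(x.size)

fit = lambda yy: np.linalg.lstsq(A, yy, rcond=None)[0]
refits = np.array([fit(y + sigma * rng.standard_normal(y.size))
                   for _ in range(20_000)])

print(np.cov(refits.T))                   # empirical scatter of refit parameters
print(sigma**2 * np.linalg.inv(A.T @ A))  # likelihood covariance; they agree
```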

2017-03-16

a prior on the CMD isn't a prior on distance, exactly

Today my research time was spent writing in the paper by Lauren Anderson (Flatiron) about the TGAS color–magnitude diagram. I think of it as being a probabilistic inference in which we put a prior on stellar distances and then infer the distance. But that isn't correct! It is an inference in which we put a prior on the color–magnitude diagram, and then, given noisy color and (apparent) magnitude information, this turns into an (effective, implicit) prior on distance. This Duh! moment led to some changes to the method section!

2017-03-15

what's in an astronomical catalog?

The stars group meeting today wandered into dangerous territory, because it got me on my soap box! The points of discussion were: Are there biases in the Gaia TGAS parallaxes? and How could we use proper motions responsibly to constrain stellar parallaxes? Keith Hawkins (Columbia) is working a bit on the former, and I am thinking of writing something short with Boris Leistedt (NYU) on the latter.

The reason it got me on my soap-box is a huge set of issues about whether catalogs should deliver likelihood or posterior information. My view—and (I think) the view of the Gaia DPAC—is that the TGAS measurements and uncertainties are parameters of a parameterized model of the likelihood function. They are not parameters of a posterior, nor the output of any Bayesian inference. If they were outputs of a Bayesian inference, they could not be used in hierarchical models or other kinds of subsequent inferences without a factoring out of the Gaia-team prior.

This view (and this issue) has implications for what we are doing with our (Leistedt, Hawkins, Anderson) models of the color–magnitude diagram. If we output posterior information, we have to also output prior information for our stuff to be used by normals, down-stream. Even with such output, the results are hard to use correctly. We have various papers, but they are hard to read!

One comment is that, if the Gaia TGAS contains likelihood information, then the right way to consider its possible biases or systematic errors is to build a better model of the likelihood function, given their outputs. That is, the systematics should be cast as adjustments to the likelihood function, not as posterior outputs, if at all possible.

Another comment is that negative parallaxes make sense for a likelihood function, but not (really) for a posterior pdf. Usually a sensible prior will rule out negative parallaxes! But a sensible likelihood function will permit them. The fact that the Gaia catalogs will have negative parallaxes is related to the fact that it is better to give likelihood information. This all has huge implications for people (like me, like Portillo at Harvard, like Lang at Toronto) who are thinking about making probabilistic catalogs. It's a big, subtle, and complex deal.

2017-03-14

snow day

[Today was a NYC snow day, with schools and NYU closed, and Flatiron on a short day.] I made use of my incarceration at home writing in the nascent paper about the TGAS color–magnitude diagram with Lauren Anderson (Flatiron). And doing lots of other non-research things.

2017-03-13

toy problem

Lauren Anderson (Flatiron) and I met early to discuss a toy model that would elucidate our color–magnitude diagram model project. Context is: We want to write a section called “Why the heck does this work?” in our paper. We came up with a model so simple, I was able to implement it during the drinking of one coffee. It is, of course, a straight-line fit with intrinsic width, which we then use to de-noise the data we started with.
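
Here is roughly what that one-coffee toy looks like (my reconstruction for this post, not the paper's actual code): fit the line, then shrink each noisy point toward it by the usual inverse-variance weighting.

```python
import numpy as np

# Toy de-noising: a straight line with intrinsic Gaussian width acts as the
# prior; each noisy point is shrunk toward the line.
rng = np.random.default_rng(3)
x = rng.uniform(0.0, 10.0, 200)
y_true = 2.0 * x + 1.0 + 0.3 * rng.standard_normal(x.size)  # intrinsic width 0.3
y_obs = y_true + 1.0 * rng.standard_normal(x.size)          # noisy measurements

m, b = np.polyfit(x, y_obs, 1)          # crude straight-line fit
line = m * x + b
V_int, V_noise = 0.3**2, 1.0**2
w = V_int / (V_int + V_noise)           # shrinkage factor
y_denoised = line + w * (y_obs - line)  # posterior mean under the line prior
```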

planning a paper sprint, completing a square

Lauren Anderson (Flatiron) and I are going to sprint this week on her paper on the noise-deconvolved color–magnitude diagram from the overlap of Gaia TGAS, 2MASS, and the PanSTARRS 3-d dust map. We started the day by making a long to-do list for the week, that could end in submission of the paper. My first job is to write down the data model for the data release we will do with the paper.

At lunch time I got distracted by my project to find a better metric than chi-squared to determine whether two noisily-observed objects (think: stellar spectra or detailed stellar abundance vectors) are identical or indistinguishable, statistically. The math involved completing a huge square (in linear-algebra space) twice. Yes, twice. And then the result is—in a common limit—exactly chi-squared! So my intuition is justified, and I know where it will under-perform.
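
For the record, the common-limit result is the familiar two-sample statistic: difference the two noisy measurements, sum their covariances, and form the quadratic. A generic sketch (the document handles the less common limits, where this simple form under-performs):

```python
import numpy as np
from scipy import stats

# Null hypothesis: x1 and x2 are noisy measurements of the same underlying
# vector, with known covariances C1 and C2. Then d = x1 - x2 has covariance
# C1 + C2, and the quadratic form is chi-squared with len(d) degrees of freedom.
def identical_test(x1, C1, x2, C2):
    d = x1 - x2
    chisq = d @ np.linalg.solve(C1 + C2, d)
    p_value = stats.chi2.sf(chisq, df=d.size)
    return chisq, p_value
```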

2017-03-10

the Milky Way halo

At the NYU Astro Seminar, Ana Bonaca (Harvard) gave a great talk, about trying to understand the dynamics and origin of the Milky Way halo. She has a plausible argument that the higher-metallicity halo stars are the halo stars that formed in situ and migrated out, while the lower-metallicity stars were accreted. If this holds up, I think it will probably test a lot of things about the Galaxy's formation, history, and dark-matter distribution. She also talked about stream fitting to see the dark-matter component.

On that note, we started a repo for a paper on the information theory of cold stellar streams. We re-scoped the paper around information rather than the LMC and other peculiarities of the Local Group. Very late in the day I drafted a title and abstract. This is how I start most projects: I need to be able to write a title and abstract to know that we have sufficient scope for a paper.

2017-03-09

The Cannon and APOGEE

I further discussed the Cramér-Rao bound (or Fisher-matrix) computations on cold stellar streams being performed by Ana Bonaca (Harvard). We discussed how things change as we increase the numbers of parameters, and designed some possible figures for a possible paper.

I had a long phone call with Andy Casey (Monash) about The Cannon, which is being run inside APOGEE2 to deliver parameters in a supplemental table in data release 14. We discussed issues of flagging stars that are far from the training set. This might get strange in high dimensions.

In further APOGEE2 and The Cannon news, I dropped an email on the mailing lists about the radial-velocity measurements that Jason Cao (NYU) has been making for me and Adrian Price-Whelan (Princeton). His RV values look much better than the pipeline defaults, which is perhaps not surprising: The pipeline uses some cross-correlation templates, while Cao uses a very high-quality synthetic spectrum from The Cannon. This email led to some useful discussion about other work that has been done along these lines within the survey.

2017-03-08

does the Milky Way disk have spiral structure?

At stars group meeting, David Spergel (Flatiron) was tasked with convincing us (and Price-Whelan and I are skeptics!) that the Milky Way really does have spiral arms. His best evidence came from infrared emission in the Galactic disk plane, but he brought together a lot of relevant evidence, and I am closer to being convinced than ever before. As my loyal reader knows, I think we ought to be able to see the arms in any (good) 3-d dust map. So, what gives? That got Boris Leistedt (NYU), Keith Hawkins (Columbia), and me thinking about whether we can do this now, with things we have in-hand.

Also at group meeting, Semyeong Oh (Princeton) showed a large group-of-groups she has found by linking together co-moving pairs into connected components by friends-of-friends. It is rotating with the disk but at a strange angle. Is it an accreted satellite? That explanation is unlikely, but if it turns out to be true, OMG. She is off to get spectroscopy next week, though John Brewer (Yale) pointed out that he might have some of the stars already in his survey.

2017-03-07

finding the dark matter with streams

Today was a cold-stream science day. Ana Bonaca (Harvard) computed derivatives today of stream properties with respect to a few gravitational-potential parameters, holding the present-day position and orientation of the stream fixed. This permits computation of the Cramér-Rao bound on any inference or estimate of those parameters. We sketched out some ideas about what a paper along these lines would look like. We can identify the most valuable streams, the streams most sensitive to particular potential parameters, the best combinations of streams to fit simultaneously, and the best new measurements to make of existing streams.
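
Because the stream model is fast, those derivatives can be simple finite differences, and the rest is the standard Fisher-matrix recipe. Schematically (my sketch with a placeholder model function, not Bonaca's code):

```python
import numpy as np

# Fisher information for a model with Gaussian measurement covariance C:
# F = D^T C^-1 D, with D the matrix of partial derivatives of the predicted
# observables with respect to the parameters (here by central differences).
def fisher_matrix(model, theta0, C, eps=1e-4):
    theta0 = np.asarray(theta0, dtype=float)
    cols = []
    for j in range(theta0.size):
        dt = np.zeros_like(theta0)
        dt[j] = eps
        cols.append((model(theta0 + dt) - model(theta0 - dt)) / (2 * eps))
    D = np.array(cols).T              # shape (n_data, n_params)
    return D.T @ np.linalg.solve(C, D)

# Cramer-Rao: np.linalg.inv(F) lower-bounds the covariance of any unbiased
# estimate of the potential parameters.
```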

Separately from this, I had a phone conversation with Adrian Price-Whelan (Princeton) about the point of doing stream-fitting. It is clear (from Bonaca's work) that fitting streams in toy potentials is giving us way-under-estimated error bars. This means that we have to add a lot more potential flexibility to get more accurate results. We debated the value of things like basis-function expansions, given that these are still in the regime of toy (but highly parameterized toy) models. We are currently agnostic about whether stream fitting is really going to reveal the detailed properties of the Milky Way's dark-matter halo. That is, for example, the properties that might lead to changes in what we think is the dark-matter particle.

2017-03-06

LMC effect on streams; dust corrections

Ana Bonaca (Harvard) showed up for a week of (cold) stellar streams inference. Our job is either to resurrect her project to fit multiple streams simultaneously, or else choose a smaller project to hack on quickly. One thing we have been discussing by email is the influence of the LMC (and SMC and M31 and so on) on the streams. Will it be degenerate with halo quadrupole or other parameters? We discussed how we might answer this question without doing full probabilistic inferences: In principle we only need to take some derivatives. This is possible, because Bonaca's generative stream model is fast. We discussed the scope of a minimum-scope paper that looks at these things, and Bonaca started computing derivatives.

Lauren Anderson (Flatiron) and I looked at her dust estimates for the stars in Gaia DR1 TGAS. She is building a model of the color–magnitude diagram with an iterative dust optimization: At zeroth iteration, the distances are (generally) over-estimated; we dust-correct, fit the CMD, and re-estimate distances. Then we re-estimate dust corrections, and do it again. The dust corrections oscillate between over- and under-corrections as the distances oscillate between over- and under-estimates. But it does seem to converge!

2017-03-03

similarities of stars; getting started in data science

I met with Keith Hawkins (Columbia) in the morning, to discuss how to find stellar pairs in spectroscopy. I fundamentally advocated chi-squared difference, but with some modifications, like masking things we don't care about, removing trends on length-scales (think: continuum) that we don't care about, and so on. I noted that there are things to do that are somewhat better than chi-squared difference, relating to either hypothesis testing or parameter estimation. I promised him a note about this, and I also owe the same to Melissa Ness (MPIA), who has similar issues but in chemical-abundance (rather than purely spectral) space. Late in the day I worked on this problem over a beer. I think there is a very nice solution, but it involves (as so many things like this do) a non-trivial completion of a square.

In the afternoon, I met with my undergrad-and-masters research group. Everyone is learning how to install software, and how to plot spectra, light curves, and rectangular data. We talked about projects with the Boyajian Star, and also with exoplanets in 1:1 resonances (!).

2017-03-02

D. E. Shaw

The research highlight of my day was a trip to D. E. Shaw, to give an academic seminar (of all things) on extra-solar planet research. I was told that the audience would be very mathematically able and familiar with physics and engineering, and it was! I talked about the stationary and non-stationary Gaussian Processes we use to model stellar (stationary) and spacecraft (non-stationary) variability, how we detect exoplanet signals by brute-force search, and how we build and evaluate hierarchical models to learn the full population of extra-solar planets, given noisy observations. The audience was interactive and the questions were on-point. Of course many of the things we do in astrophysics are not that different—from a data-analysis perspective—from things the hedge funds do in finance. I spent my time with the D. E. Shaw people trying to understand the atmosphere in the firm. It seems very academic and research-based, and (unlike at many banks), the quantitative researchers run the show.
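
The stationary vs non-stationary distinction is just whether the kernel depends only on the time lag. A generic sketch of the two kinds (illustrative kernels only, not our actual Kepler code):

```python
import numpy as np

# Stationary kernel: depends only on the lag t - t' (stellar variability).
def k_stationary(t1, t2, amp=1.0, ell=1.0):
    dt = t1[:, None] - t2[None, :]
    return amp**2 * np.exp(-0.5 * dt**2 / ell**2)

# Non-stationary kernel: an amplitude envelope that depends on absolute time,
# so k(t, t') is not a function of t - t' alone (a crude stand-in for drifting
# spacecraft systematics).
def k_nonstationary(t1, t2, ell=1.0):
    a = 1.0 + 0.1 * t1  # time-dependent amplitude at the first set of times
    b = 1.0 + 0.1 * t2  # and at the second
    dt = t1[:, None] - t2[None, :]
    return a[:, None] * b[None, :] * np.exp(-0.5 * dt**2 / ell**2)
```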

2017-03-01

fitting stellar spectra and deblending galaxy images

Today was group meetings day. In the Stars meeting, John Brewer (Yale) told us about fitting stellar spectra with temperature, gravity, and composition, epoch-by-epoch for a multi-epoch radial-velocity survey. He is trying to understand how consistent his fitting is, what degeneracies there are, and whether there are any changes in temperature or gravity that co-vary with radial-velocity jitter. No results yet, but we had suggestions for tests to do. His presentation reinforced my idea (with Megan Bedell) to beat spectral variations against asteroseismological oscillation phase.

In the Cosmology meeting, Peter Melchior (Princeton) told us about attempts to turn de-blending into a faster and better method that is appropriate for HSC and LSST-generation surveys. He blew us away with a tiny piece of deep HSC imaging, and then described a method for deblending that looks like non-negative matrix factorization, plus convex regularizations. He has done his research on the mathematics around convex regularizations, reminding me that we should do a more general workshop on these techniques. We discussed many things in the context of Melchior's project; one interesting point is that the deblending problem doesn't necessarily require good models of galaxies (Dustin Lang and I always think of it as a modeling problem); it just needs to deliver a good set of weights for dividing up photons.

2017-02-28

#DtU17, day two

Today I dropped in on Detecting the Unexpected in Baltimore, to provide a last-minute talk replacement. In the question period of my talk, Tom Loredo (Cornell) got us talking about precision vs accuracy. My position is a hard one: We never have ground truth about things like chemical abundances of stars; every chemical abundance is a latent variable; there is no external information we can use to determine whether our abundance measurements are really accurate. My view is that a model is accurate only inasmuch as it makes correct predictions about qualitatively different data. So we are left with only precision for many of our questions of greatest interest. More on this in some longer form, later.

Highlights (for me; very subjective) of the day's talks were stories about citizen science. Chris Lintott (Oxford) told us about tremendous lessons learned from years of Zooniverse, and the non-trivial connections between how you structure a project and how engaged users will become. He also talked about a long-term vision for partnering machine learning and human actors. He answered very thoughtfully a question about the ethical aspects of crowd-sourcing. Brooke Simmons (UCSD) showed us how easy it is to set up a crowd-sourcing project on Zooniverse; they have built an amazingly simple interface and toolkit. Steven Silverberg (Oklahoma) told us about Disk Detective and Julie Banfield (ANU) told us about Radio Galaxy Zoo. They both have amazing super-users, who have contributed to published papers. In the latter project, they have found (somewhat serendipitously) the largest radio galaxy ever found! One take-away from my perspective is that essentially all of the discoveries of the Unexpected have happened in the forums—in the deep social interaction parts of the citizen-science sites.

2017-02-27

galaxy masses; text as data

After a morning working on terminology and notation for the color–magnitude diagram model paper with Lauren Anderson (Flatiron), I went to two seminars. The first was Jeremy Tinker (NYU) talking about the relationship between galaxy stellar mass and dark-matter halo mass as revealed by fitting of number-count and clustering data in large-scale structure simulations. He finds that only models with extremely small scatter (less—maybe far less—than 0.18 dex) are consistent with the data, and that the result is borne out by follow-ups with galaxy–galaxy lensing and other tests. This is very hard to understand within any realistic model for how galaxies form, and constitutes a new puzzle for standard cosmology plus gastrophysics.

In the afternoon there was a very wide-ranging talk by Mark Dredze (JHU) on data-science methods for social science, intervention in health issues, and language encoding. He is interested in taking topic models and either deepening them (to make better features) or else enriching their probabilistic structure. It is all very promising, though these subjects are—despite their extreme mathematical sophistication—in their infancy.

2017-02-26

one paragraph per day

[I have been on vacation for a week.]

All I have done in the last week is (fail to) keep up with email (apologies y'all) and write one paragraph per day in the nascent paper with Lauren Anderson (Flatiron) about our data-driven model of the color–magnitude diagram. The challenge is to figure out what to emphasize: the fact that we de-noise the parallaxes, or the fact that we can extend geometric parallaxes to more distant stars, or the fact that we don't need stellar models?

2017-02-17

57 elements; research meetings

Today the astro seminar was given by Or Graur (CfA). He spoke about various discoveries he and collaborators have made in type Ia supernovae. For me, the most exciting was the discovery of atomic-mass-57 elements, which he can find by looking at the late-time decay: In the same way that we identify the mass-56 elements by timing the supernova decay at intermediate times, he finds the mass-57 elements, except at much later times (decay times of years). He pointed out a caveat, which is that the late-time light curve can also be affected by unresolved light echoes. That's interesting and got me thinking (once again) about all the science related to light echoes that might be under the radar right now.

I hosted today my first-ever undergraduate research meeting. I got together undergraduates and pre-PhD students who are interested in doing research, and we discussed the Kepler and APOGEE data. My plan (and remember, I like to fail fast) is to have them work together on overlapping projects, so they all have coding partners but also their own projects. With regular meetings, it can fit into schedules and become something like a class!

2017-02-15

stellar twins and stellar age indicators

In the stars group meeting at CCA, Keith Hawkins (Columbia) blew us away with examples of stellar twins, identified with HARPS spectra. They were chosen to have identical derived spectroscopic parameters in three or four labels, but were amazingly identical at signal-to-noise of hundreds. He then showed us some he found in the APOGEE data, using very blunt tools to identify twins. This led to a long discussion of what we could do with twins, and things we expect to find in the data, especially regarding failures of spectroscopic twins to be identical in other respects, and failures of twins identified through means other than spectroscopic to be identical spectroscopically. Lots to do!

This was followed by Ruth Angus (Columbia) walking us through all the age-dating methods we have found for stars. The crowd was pretty unimpressed with many of our age indicators! But they agreed that we should take a self-calibration approach to assemble them and cross-calibrate them. It also interestingly connects to the twins discussion that preceded. Angus and I followed the meeting with a more detailed discussion about our plans, in part so that she can present them in a talk in her near future.

2017-02-14

abundance dimensionality, optimized photometric estimators

Kathryn Johnston (Columbia) organized a Local-Group meeting of locals, or a local group of Local Group researchers. There were various discussions of things going on in the neighborhood. Natalie Price-Jones (Toronto) started up a lot of discussion with her work on the dimensionality of chemical-abundance space, derived purely from the APOGEE spectral data. That is, they are inferring the dimensionality without explicitly measuring chemical abundances or interpreting the spectra at all. Much of the questioning centered on how they know that the diversity they see is purely or primarily chemical rather than, say, instrumental or stellar nuisances.

At lunch time there were amusing things said at the Columbia Astro Dept Pizza Lunch. One was a very nice presentation by Benjamin Pope (Oxford) about how to do precise photometry of saturated stars in the Kepler data. He has developed a method that fully scoops me in one of my unfinished projects: The OWL, in which the pixel weights used in his soft-aperture photometry are found through the optimization of a (very clever, in Pope's case) convex objective function. After the Lunch, we discussed a huge space of generalizations, some in the direction of more complex (but still convex) objectives, and others in the direction of train-and-test to ameliorate over-fitting.

2017-02-13

JWST opportunity

Benjamin Pope (Oxford) arrived in New York today for a few days' visit, to discuss projects of mutual interest, with the hope of starting collaborations that will continue in his (upcoming) postdoc years. One thing we discussed was the JWST Early Release Science proposal call. The idea is to ask for observations that would be immediately scientifically valuable, but also create good archival opportunities for other researchers, and also help the JWST community figure out what are the best ways to make best use of the spacecraft in its (necessarily) limited lifetime. I am kicking around four ideas, one of which is about photometric redshifts, one of which is about precise time-domain photometry, one of which is about exoplanet transit spectroscopy, and one of which is about crowded-field photometry. The challenge we face is: Although there is tons of time to write a proposal, letters of intent are required in just a few weeks!

2017-02-10

nucleosynthesis and stellar ages

Benoit Côté (Victoria & MSU) came to NYU for the day. He gave a great talk about nucleosynthetic models for the origin of the elements. He is building a full pipeline from raw nuclear physics through to cosmological simulations of structure formation, to get it all right. There were many interesting aspects to his talk and our discussions afterwards. One was about the i-process, intermediate between r and s. Another was about how r-process elements (like Eu) put very strong constraints on the rate at which stars form within their gas. Another was about how we have to combine nucleosynthetic chemistry observations with other kinds of observations (of, say, the PDMF, and neutron-star binaries, and so on) to really get a reliable and true picture of the nucleosynthetic story.

Late in the afternoon, I met with Ruth Angus (Columbia) to further discuss our project on cross-calibrating (or really, self-calibrating) all stellar age indicators. We wrote down some probability expressions, sketched a rough design for the code, and discussed how we might structure a Gibbs sampler for this model, which is inherently hierarchical. We also drew a cool chalk-board graphical model (in this tweet), which has overlapping plates, which I am not sure is permitted in PGMs?

2017-02-09

making our own Gaia pipeline

My writing today was in the introduction to the paper Lauren Anderson (Flatiron) and I are writing about the color–magnitude diagram and statistical shrinkage in the Gaia TGAS—2MASS overlap. My view is that the idea behind the project is the same as the fundamental idea behind the Gaia Mission: The astrometry data (the parallaxes) give distances to the nearby stars; these are used to calibrate spectrophotometric models, which deliver distances for the (far more numerous) distant stars. Our goal is to show that this can be done without any involvement of stellar physics or physical models of stellar structure, evolution, or photospheres.

2017-02-08

velocities for APOGEE stars

At stars group meeting, run by Lauren Anderson (Flatiron), new graduate student Jason Cao (NYU) showed us his work on measuring radial velocities for individual-visit APOGEE spectra. He has a method for determining the radial velocity that does not involve interpolating either the data or the model. During his presentation, Jo Bovy (Toronto) pointed out that, actually, the APOGEE team appears to do an interpolation of the data after the one-d spectral extraction. That's unfortunate! But anyway, we have a method that doesn't involve any interpolation which could be used on a survey that doesn't ever do interpolation before or after extraction. And yes, you can extract a spectrum on any wavelength grid you like, from any two-d data you like, without doing interpolation! The group-meeting attendees had many constructive comments for Cao.

2017-02-07

tuesday lunch, fundamental physics

I spent the day at Princeton, hosted by Scott Tremaine (IAS). Tuesday lunch is still alive and well in Princeton, though I was shocked to find it happening in the Princeton Physics Department's Jadwin Hall. One beautiful result shown at the lunch was presented by Kento Masuda (Princeton), looking at hot exoplanets with eccentric outer companions. He has two examples that show dramatic transit timing and duration change events, presumably caused by a conjunction near the outer planet's periastron. The data are incredible and he generates a very informative (think: narrow) posterior on the outer planet's properties, despite the fact that the outer planet is not directly observed at all (and has a many-year period).

I spent most of my research time with Price-Whelan (Princeton) and Tremaine, discussing projects on the go. We spent a lot of time talking about whether it will be possible to learn fundamental things about the dark matter by building dynamical models of the stellar motions in the Milky Way. Tremaine came up with lots of reasons to be skeptical! However, if the dark matter doesn't annihilate (and even whether or not it is found in an underground lab), dynamics will be our only real tool. So I am confused. To me, it is much more interesting to model the dynamics of the Milky Way if it will tell us what the dark matter is than if it will tell us nothing more than some details about our contingent collapse and assembly history within a generic dark-matter scenario.

Getting even more philosophical, Tremaine and I discussed the question: What astronomy projects are purely descriptive of the "weather" of the Universe, and what projects get at fundamental physical processes? Even stronger: What astronomy projects might lead to changes to our beliefs about the fundamental physics itself? And how important is that, anyway? Revealing our prejudices, we both wanted to say that the most important areas of astronomy are those that might lead to changes in our beliefs about fundamental physics. But then we both wanted to say that exoplanet science is super-interesting! How to resolve this? Or is there a conflict?

2017-02-06

#GaiaMission selection functions?

The only research today was discussion of projects with Daniela Huppenkothen (NYU), Lauren Anderson (Flatiron), and Jo Bovy (Toronto). One subject of conversation was the need for selection functions in analyzing Gaia data, both now and in the future. Bovy is working on a selection function for Gaia DR1 TGAS and we discussed how we might generate a selection function for the final Gaia data release. I have a plan, but it involves making a simulated Gaia mission to get it started.

2017-02-03

#JudyFest, day 3

Today was the third and final day of The Galactic Renaissance. Rosie Wyse (JHU) and Branimir Sesar (MPIA) both showed evidence for vertical ripples going outwards in the Milky Way disk. These could plausibly be raised by an encounter with Sagittarius or something similar. However, Sesar argued that the amplitude is too large to be anything reasonable in the Local Group. That suggests that maybe the evidence isn't secure?

Raja GuhaThakurta (UCSC) mentioned the argument that the halo is worth observing because you can see the accretion history, at least in principle. There were talks after his by Sales (UCR), Lee (Vanderbilt), and Bonaca (CfA) on the observed and simulated properties of our halo.

Phil Hopkins (Caltech) and Yves Revaz (EPFL) gave impressive galaxy simulation results. Hopkins's renderings are just the bomb, and we discussed them in some detail afterwards. Hopkins claimed that low-mass galaxies (at least star-forming ones) are always so far out of steady-state, you can never measure their masses using virial or other steady-state indicators. He also brought up the point that the dust in the ISM has different dynamics than the molecular gas, and therefore there might be insane separation of material as stars form. I also discussed that with him afterwards.

2017-02-02

#JudyFest, day 2

Today was the second day of The Galactic Renaissance. Two scientific themes of the day were globular-cluster star abundance patterns, and stellar models that account for 3-d and non-local-thermodynamic-equilibrium (NLTE) effects. On the former, it was even suggested by one speaker that the existence of chemical-abundance variations of certain kinds might be part of the definition of a globular cluster! There are some extreme cases, and various claims that the most extreme examples might be the stripped centers of ancient accreted galaxies!

On the stellar-modeling front, there were impressive demonstrations from Frebel (MIT), Bergemann (MPIA), and Thygesen (Caltech) that improving the realism of the physical inputs to stellar models improves their precision and their accuracy. Thygesen did a very nice thing: using (relatively cheap) 1-D models to inform functional forms for interpolation across grid points of a (relatively expensive) 3-D model grid. That got me interested in thinking about physics-motivated or physics-constrained interpolation methods, which could have value in lots of domains.

In a session about Judy's scientific and intellectual life, Steve Shectman (OCIW) described what the world was like in 1967, when Judy Cohen (Caltech) started graduate school. It was a time of optimism, disruption, and violence. This resonated with things I know about Cohen, because she and I used to discuss the historical context of her origins as an astronomer back when I was a graduate student.

Another highlight of the day was a discussion with Kim Venn (Victoria) and Matt Shetrone (Texas) about persistence effects that damage a significant fraction of spectra in a significant fraction of APOGEE exposures. We discussed the trade-offs between correction and avoidance, and what it might take to fix the problem.

Over dinner, I and others delivered tributes to Judy Cohen. She really has had an amazing scientific impact, been a wonderful person, and had a big influence on me. She also said nice things about me in her own speech!

2017-02-01

#JudyFest, day 1

Today was the first day of the meeting The Galactic Renaissance, a meeting in honor of Judy Cohen (Caltech), who was one of my (three) PhD advisors (with Blandford and Neugebauer). On the plane to the meeting I built a brand-new talk about data-driven models of stars, bringing in stuff we are doing in HARPS and Gaia and connecting it to what we are doing with The Cannon.

One highlight of the meeting was Steve Shectman (OCIW) talking about a new infrared multi-object spectrograph he is designing for Magellan. He talked about some interesting instrument design considerations, which was fitting, because Judy Cohen built (with Bev Oke and a great team) the most heavily used instrument on the Keck Telescopes (the LRIS spectrograph, which I used in my PhD work). One point is that all spectrographs are fundamentally trade-offs between spatial and spectral extent, because the total number of pixels is limited. He noted that spectrograph cost and weight are strong functions of the diameter of the collimated beam, which is simultaneously obvious and non-trivial. Finally, he noted that putting an imaging mode into a multi-object spectrograph substantially increases the cost and complexity: It requires achromatic optics, and chromatic aberration is something imagers hate but spectrographs don't mind at all!

Another highlight was a talk about solar twins by Jorge Melendez (São Paulo). By using carefully chosen twins, he can measure abundances to better than 0.01 dex. He showed some great data. Even more absurd: he is looking at binary stars, both members of which are themselves solar twins! And if that isn't absurd enough, he also has binaries of solar twins in which one member hosts an exoplanet! Awesome. He mentioned that [Y/Mg] is a (possibly complicated) age indicator, which is relevant to things Ruth Angus (Columbia) and I have been thinking about.

2017-01-31

fail fast!

It was a low-research day! The most productive moment came early in the morning, when I had a great discussion with Boris Leistedt (NYU) and Adrian Price-Whelan (Princeton) about the structure of my group meetings at CCA. We need to change them; we aren't hearing enough from young people, and we aren't checking in enough on projects in progress. They agreed to lead a discussion of this in both group meetings tomorrow, and to make format changes, implemented immediately. I have to learn that failing to do things right is only bad if we don't learn from it and try to do better.

Right after this conversation, Price-Whelan and I got into a short discussion about making kinematic age estimates for stars, using widely separated, nearly co-moving pairs. I hypothesized that for any real co-moving pair, the separation event (the spatial position and time at which they were last co-spatial) will be better estimated than either the velocity or the separation, given (say) Gaia data.
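For concreteness, here is the straight-line-orbit version of that separation event, as a minimal numpy sketch (the function and its names are mine, and real inference would propagate the full Gaia noise model rather than point estimates):

```python
import numpy as np

def separation_event(dx, dv):
    # dx: current relative position; dv: relative velocity (any consistent units).
    # On straight-line orbits, minimize |dx + t * dv| over time t.
    t_min = -np.dot(dx, dv) / np.dot(dv, dv)  # negative means the event is in the past
    return t_min, dx + t_min * dv             # event time and minimum-separation vector

# a pair separating along x: last co-spatial two time units ago
print(separation_event(np.array([1.0, 0.0, 0.0]), np.array([0.5, 0.0, 0.0])))
```

The hypothesis is that this combination of position and velocity can come out better constrained than either input on its own.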

2017-01-30

precise ages from imprecise indicators; QCD dark matter

In the morning, I met with Ruth Angus (Columbia) to discuss the ages of stars. We brainstormed all possible age estimates for stars, and listed some limitations and epistemological properties. In addition to the usuals (rotation, activity, isochrone fitting, asteroseismology, and so on), we came up with some amusing options.

For example, the age of the Universe is a (crude, simple, very stringent) age estimate for every single star, no matter what. It is a very low-precision estimate, but it is unassailable (at the present day). Another odd one is the separation of co-moving pairs. In principle every co-moving pair provides an age estimate given the relative velocity and relative position, with the proviso that the stars might not be co-eval. This is a good age estimate except when it isn't, and we only have probabilistic information about when it isn't.

We then wrote down the basic idea for a project to build up a hierarchical model of all stellar ages, where each star gets a latent true age, and every age indicator gets latent free parameters (if there are any). Then we use stars that overlap multiple age indicators to simultaneously infer all free parameters and all ages. The hope—and this is a theme I would like to thread throughout all my research—is that many bad age indicators (and they are all bad, for different reasons) will, when combined, nonetheless produce precise age estimates for many stars.
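As a toy version of the idea (everything here is invented: two indicators, Gaussian noise, one unknown calibration scale), the stars that overlap both indicators pin down the calibration, which in turn sharpens every age:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.RandomState(0)
N = 50
true_age = rng.uniform(1.0, 12.0, N)        # Gyr
obs1 = true_age + 1.5 * rng.randn(N)        # indicator 1: unbiased but noisy
obs2 = 0.8 * true_age + 0.7 * rng.randn(N)  # indicator 2: precise but mis-calibrated

def negloglike(p):
    # latent true ages for every star, plus one latent calibration scale s
    ages, s = p[:N], p[N]
    return (np.sum((obs1 - ages) ** 2) / (2 * 1.5 ** 2)
            + np.sum((obs2 - s * ages) ** 2) / (2 * 0.7 ** 2))

res = minimize(negloglike, np.concatenate([obs1, [1.0]]))
print("recovered calibration scale:", res.x[N])  # should come out near 0.8
```

A real version would be fully Bayesian, with priors on the ages and marginalization rather than optimization, but the structure is the same.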

At lunch-time, Glennys Farrar (NYU) gave an energizing black-board talk about a dark-matter candidate that exists within QCD, made up of a highly symmetric state of six quarks. QCD is a brutal theory, so it is hard to compute the properties of this state, or its stability, but Farrar laid out some of the conditions under which it is a viable dark-matter candidate. It is very interesting phenomenologically if it exists, because it has a non-trivial cross-section for scattering off of atomic nuclei, and it could be involved in baryogenesis or the matter–anti-matter asymmetry.

2017-01-27

#AAAC, day 2

[This is the 12th birthday of this blog, and something like the 2814th post. I remain astonished that anyone reads this blog; surely it qualifies as one of the least interesting sites on the internet.]

I spent today again inside NSF headquarters. It was a good day, because most of our session was pure unstructured discussion of the issues—not presentations from anyone—in open session. All of the AAAC sessions are completely open, with an agenda and a call-in number open to literally anyone on the planet. This openness was also part of our discussion, because we got in some discussion of the opaque process by which the Decadal Survey (which is so damned important) is executed and staffed. As part of this I published the non-disclosure agreement that the National Academy of Sciences asks people to sign if they are going to participate. It is way too strong, I think.

We also talked about many other interesting priorities and issues for our report. One is that the America Competes Act explicitly refers to the astrophysics decadal process as an exemplar for research funding priority-setting in the US government. Another is that the freedom of scientists in government agencies to clearly and openly communicate without executive-branch interference is absolutely essential to everything we do. Another is that the current (formalized, open) discussion about CMB Stage-4 experiments is an absolutely great example of inter-agency, inter-institutional, cross-rivalry cooperation that will lead to a very strong proposal for the agencies, the community, and the Decadal Survey.

One very important point, which also came up at #AAS229, is that if we are going to make good, specific, actionable recommendations to the Decadal Survey about the state of the profession, about software, or about funding structures, we need to gather data now. These data are hard to gather; there are design and execution issues all over; let's have that conversation right now.

2017-01-26

#AAAC, day 1

Today was the first day of an Astronomy and Astrophysics Advisory Committee meeting at NSF headquarters in Washington, DC. We had presentations from the agencies for most of the day. Random things that I learned that interest me follow in this blog post. Our meetings are open, by the way.

NSF is trying to divest from facilities in ways that keep them running under other partners, so even though they may leave NSF's portfolio, they will at least stay part of the community. In particular, they are working to offload Arecibo to a combination of NASA and private partners.

NASA has taken its ATP theory call down to once every two years, but not reduced funding. They hope this will increase the amount of funding per submitted proposal, and the early data suggest it might. NASA and NSF have started a joint funding program called TCAN for computational methods in astrophysics. That might affect me! NASA re-balanced its fellowship postdocs, in response to concerns about the pipeline, long-term trends in its own funding portfolio, and the rise of private fellowships. This is debatable and controversial, though they did not enter into the decision lightly. What is not controversial is that they have combined all the fellowships into a common application process, substantially reducing the workload on applicants and referees.

There is an extremely big and serious CMB S-4 process going on, in which many traditionally rival scientific groups are cooperating to find consensus around what to build or do next. That's very healthy for the field, I think, and will create a very strong set of ideas for the next Decadal Survey to discuss. The Decadal is on the agenda for tomorrow!

Towards the end of the day, Paul Hertz (NASA) and I got into a fight about the Deep Space Network. I fear that I might be wrong here; I can't really claim to understand that stuff better than Hertz!

2017-01-25

stellar mergers and oscillations; cosmological dictionaries

Today was group-meeting day. In stars group meeting, Matteo Cantiello (CCA) discussed recent results on star–star interactions, including a star–star merger that may have been caught by the OGLE experiment. He gave us some order-of-magnitude thinking about the common-envelope phase and how we might use these events to understand stars. He was pessimistic about being able to do full simulations of the events; there are too many things happening at too many scales. He also showed us another tight binary system which shows period changes that suggest a merger in 2022.

Dan Foreman-Mackey (UW) spoke about linear algebra and asteroseismology. With Eric Agol (UW) he has developed linear algebra techniques such that he can solve matrix equations in linear time (and also take the determinant, which is super-important), provided that the matrix is a kernel matrix of a certain (very flexible) form. This form is capable of modeling a star's light curve as a mixture of stochastically driven oscillators. This raises the hope of automatically getting asteroseismic parameters for all TESS stars! In the discussion, we arrived at the idea of using Kepler to measure the three-point function for stellar variability. David Spergel (Flatiron) predicted that it would lead to constraints on mode coupling and other aspects of stellar physics.
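A minimal sketch of what this enables, using the celerite package (which, as I understand it, is where the machinery of Foreman-Mackey and Agol lives); all parameter values here are arbitrary:

```python
import numpy as np
import celerite
from celerite import terms

# one stochastically driven, damped harmonic oscillator term
kernel = terms.SHOTerm(log_S0=0.0, log_Q=np.log(10.0), log_omega0=np.log(2 * np.pi))
gp = celerite.GP(kernel)

t = np.sort(np.random.uniform(0, 30, 2000))
gp.compute(t, 0.1 * np.ones_like(t))  # O(N) factorization of the kernel matrix
y = gp.sample()                       # draw a fake light curve from the model
print(gp.log_likelihood(y))           # solve plus log-determinant, also O(N)
```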

Cosmology group meeting was crashed by Daniel Mortlock (Imperial) and Hiranya Peiris (UCL). Mortlock told us that there are still very high-redshift quasars being discovered, but that he still has the redshift record, and that, given Eddington time-scales, his is still the most extreme high-redshift quasar. This was followed by a wide-ranging discussion (led by Elijah Visbal, Flatiron) of the possibility that we could be using generative models or better estimators than two-point functions in 21-cm surveys designed to discover the physics of reionization of the Universe. Peiris brought up dictionary methods and we spent time discussing these, and the possibility that we could learn sparse dictionaries on simulations and use them on data. It was very vague, but gives me ideas about where we at CCA need to learn more about methodologies.
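By "dictionary methods" I understand something like the following scikit-learn sketch, with random arrays standing in for patches drawn from reionization simulations (all shapes and values hypothetical):

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

patches = np.random.randn(5000, 64)  # stand-in for patches from 21-cm simulations
dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0)
codes = dico.fit(patches).transform(patches)  # sparse coefficients per patch
# dico.components_ is the learned dictionary; transform() applies it to new data
```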

2017-01-24

gaussian-process stellar spectrum

Today was hack day, with Ruth Angus (Columbia), Megan Bedell (Chicago), Dan Foreman-Mackey (UW), and me all working on various things in parallel at NYU. Bedell and Foreman-Mackey got the Gaussian-process stellar spectroscopy model working for Bedell's HARPS data, and it is blazingly fast. It is much faster than the code Bedell and I wrote that does a dense spline. The speed comes from magic that Eric Agol (UW) and Foreman-Mackey are making happen for GP kernels of a particular (very flexible) form. We made various pragmatic decisions in this project today, like working in log flux (rather than flux) and optimizing an error model along with the other hyper-parameters. These all look like good decisions in the short run.

2017-01-23

GP stellar spectrum, explosions

Megan Bedell (Chicago) and Dan Foreman-Mackey (UW) came into town for a few days of hacking on stellar spectra. We had long discussions about the point and scope of our project, and made plans for the week. Foreman-Mackey argued that we should switch over to a Gaussian-Process model for the stellar spectrum. That seems sensible, in part because he has the fastest code in the world for that. He didn't object to our “fit and subtract” approach to looking for stellar variability in the spectral domain: As Andrew Gelman (Columbia) teaches us, inspecting residuals is how you make choices for model extension, improvement, and elaboration.

After lunch, Maryam Modjaz (NYU) gave a great, wide-ranging talk about her work on supernovae, supernova progenitors, chemical abundances, and the supernova–GRB connection. As I have commented here before, I think her results—which show that broad-line Type Ic supernovae with and without associated gamma-ray bursts live in different kinds of environments—put strong pressure on any model of GRB beaming. I also learned in her talk that there are new classes of transients, brighter than classical novae and fainter than supernovae, that are currently unexplained.

2017-01-20

stars tracing dark matter; unresolved stars

I had lunch with Mariangela Lisanti (Princeton), where we talked about seeing the dark matter in the Milky Way using stars and stellar dynamics. One simple thing we discussed is the following: To what extent do extremely old stars in the halo trace the dark matter? There are good theoretical reasons that they should come close, but also good theoretical reasons that they should not be perfect tracers. It would be interesting to know whether it is possible to get a very accurate view of the dark-matter distribution in space just by looking at the stellar positions of some carefully chosen set of stars.

After this I went through the talk slides MJ Vakili (NYU) has prepared for Berkeley next week. He has a great set of results, and an impressive talk. I also discussed an ancillary science proposal for APOGEE with Gail Zasowski (STScI): We want to look in M31 for the chemical abundance trends (with kinematics and galactocentric radius) that we see in the Milky Way by taking APOGEE spectra and then deconvolving (modeling) them as a linear superposition of stars with different chemistry and kinematics. That would be living the dream!

2017-01-19

tidal disruptions are not trivial

Today there was a great, educational visit by James Guillochon (CfA) to the Flatiron Institute. Guillochon led a discussion about tidal disruption events (stars disrupted by black holes) and the transient phenomena they should make. He (perhaps unintentionally?) sowed doubt in my mind that the things currently classified as TDEs are in fact TDEs: There ought to be a huge range of phenomena, depending on the star, the black hole, and the orbits. He gave a beautiful answer to my question about seeing the star brighten simply because of the tidal distortion (which is immense: the star stretches out to thousands of AU in length): He predicted that there should be a rapid recombination. That, of course, made me think that we should look for H-alpha or Lyman-alpha flashes! After this discussion, he and I discussed his work assembling all (and really all) published data about supernovae (photometry and spectroscopy), which is a scraping project of immense scope.

2017-01-18

blind analysis; hierarchical models

In stars group meeting, we discussed two hierarchical models of the Gaia TGAS data. Keith Hawkins (Columbia) is building a model of the red clump stars; he finds that if he selects red-clump stars carefully, they are very good standard candles. His hierarchical model determines this and also de-noises the parallaxes for them. Boris Leistedt (NYU) went even further and deconvolved the full color-magnitude diagram, though with a baroque hierarchical model that includes a highly parameterized model for the density of stars in color–magnitude space.
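A cartoon of the de-noising step (not Hawkins's actual model, and all numbers here are made up) is to multiply the TGAS parallax likelihood by the standard-candle prior implied by the apparent magnitude:

```python
import numpy as np

def denoised_parallax(plx, plx_err, m_app, M_rc=0.5, M_sig=0.1):
    # posterior-mean parallax (mas) for a putative red-clump star
    grid = np.linspace(0.05, 20.0, 4000)               # parallax grid in mas
    M = m_app - (5.0 * np.log10(1000.0 / grid) - 5.0)  # implied absolute magnitude
    post = (np.exp(-0.5 * ((M - M_rc) / M_sig) ** 2)       # standard-candle prior
            * np.exp(-0.5 * ((grid - plx) / plx_err) ** 2))  # parallax likelihood
    post /= np.trapz(post, grid)
    return np.trapz(grid * post, grid)

print(denoised_parallax(plx=2.0, plx_err=0.6, m_app=9.0))  # pulled toward the prior
```

The hierarchical part is that the mean and scatter of the red-clump absolute magnitude are themselves inferred from the ensemble, not fixed as they are here.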

In cosmology group meeting, a lot happened, with Josh Speagle (CfA) talking about next-generation photometric redshifts, Mike Blanton (NYU) talking about the huge NSF proposal we are putting in at NYU around physics and data science, and David Spergel (Flatiron), Blanton, and me arguing about blind analyses. The latter subject is rich with issues. We want (and need) exploratory data analysis, but we also want (and need) secure statistical results without p-hacking, forking paths, and so on. There was disagreement among the group, but I argued that you can have it all if you design right. There are interesting conflicts with open science there.

I also met with Francisco Villaescusa (Flatiron) to talk about work on neutrinos in large-scale structure. He promised me some papers to read on the subject.

2017-01-17

Ohio State University

I spent the day at Ohio State University today, visiting the Physics and Astronomy Departments. I had a great time! Too many things happened to mention them all, but here are some highlights:

At the arXiv coffee discussion, among other results we discussed was a new paper by Radek Poleski and colleagues in which they identify hundreds of thousands of variable stars in the OGLE data set. Poleski showed some examples, which included eclipsing binaries in which both stars are elongated tidally (and it is obvious in the light curves), transiting exoplanets, transiting exoplanets where the transits come and go like there is rapid precession, periodic variables, periodic variables with second derivatives in the period (hence possibly accelerations) and so on. He claims that every single one of the variables was checked by hand.

In the stars group meeting run by Jennifer Johnson, we discussed mainly stellar rotation, and how it connects to age and stellar evolution. Johnson runs the group meeting such that each participant brings a figure (if they can) and that figure is discussed and improved. That's a good idea. Also in that meeting we discussed what to do if your result is scooped!

In the galaxies group meeting towards the end of the day, we argued about the dimensionality of chemical-abundance space, both in theory and in the observations, with me arguing that obviously the observational space is higher dimensionality than any theory space. But David Weinberg challenged me, and forced me to sharpen my arguments, and also make better plans for whatever paper I am going to write about this!

2017-01-16

reporting

I spent the day writing reports for the National Science Foundation on my grants. Does this count as research? I guess it does, in the long run! The break in the day was a long lunch with Boris Leistedt (NYU) in which we discussed his priorities for research in the near term, and also his paper on an empirical model of stars from the Gaia data.

2017-01-13

photometric redshifts at low redshift!

Marla Geha (Yale) came into town today to ask Boris Leistedt (NYU) and me whether our new photometric-redshift method could be used at very low redshifts. In general, low redshifts are difficult because even a large fractional change in the distance creates only a small change in the colors when the redshift is small. Geha (with Wechsler and others) has a sample of possible (intrinsically) faint satellites of low-redshift galaxies, and they would like to improve the efficiency of their spectroscopic follow-up. A great project for a great tool!

2017-01-12

viz

In my research time today, Lauren Anderson (CCA) and I pair-coded a visualization of a two-d mixture of Gaussians. This involves a little linear algebra.
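The linear algebra in question is essentially the following sketch (names are mine): eigen-decompose each 2x2 covariance to get the ellipse axes and orientation:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse

def draw_gaussian(ax, mean, cov, nsigma=2.0, **kwargs):
    # eigenvectors give the orientation; sqrt-eigenvalues give the semi-axes
    vals, vecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    angle = np.degrees(np.arctan2(vecs[1, -1], vecs[0, -1]))
    width, height = 2 * nsigma * np.sqrt(vals[::-1])
    ax.add_patch(Ellipse(mean, width, height, angle=angle, fill=False, **kwargs))

fig, ax = plt.subplots()
draw_gaussian(ax, [0.0, 0.0], np.array([[2.0, 1.2], [1.2, 1.0]]))
ax.set_xlim(-4, 4)
ax.set_ylim(-4, 4)
plt.show()
```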

2017-01-11

so many things (I love Wednesdays)

In the stars group meeting at CCA, there was huge attendance today. David Spergel (CCA) opened by giving a sense of the WFIRST GO and GI discussion that will happen this week at CCA. The GI program is interesting: It is like an archival program within WFIRST. This announcement quickly turned into an operational discussion about what WFIRST can do to avoid saturating bright stars.

Katia Cunha (Observatorio Nacional, Brazil) spoke about two topics in APOGEE. The first is that they have found new elements in the spectra! They did this by looking at the spectra of s-process-enhanced stars (metal-poor ones) and finding strong, unidentified lines. This is exciting, because before this, APOGEE had no measurements of the s process. The second topic is that they are starting to get working M-dwarf models, which is a first, and can measure the abundances of 13 elements in M dwarfs. Verne Smith (NOAO) noted that this is very important for the future use of these spectrographs, and for exoplanet science in the age of TESS. On this latter point, the huge breakthrough was improvements to the molecular line lists.

Dave Bennett (GSFC) talked to us about observations of the Bulge with K2 and other instruments to do microlensing, microlensing parallax, and exoplanet discovery. He noted that there isn't a huge difference between doing characterization and doing search: The photometry has to be good to find microlensing events and not be fooled by false positives. He is in NYC this week working with Dun Wang (NYU).

Jeffrey Carlin (NOAO) led a discussion of detailed abundances for Sagittarius-stream stars as obtained with a CFHT spectrograph fiber-fed from Gemini N. These abundances might unravel the stream for us, and inform dynamical models. This morphed into a conversation about why the stellar atmosphere models are so problematic, which we didn't resolve (surprised?). I pitched a project in which we use Carlin's data at high resolution to train a model for the LAMOST data, as per Anna Y. Q. Ho (Caltech), and then do science with tens of thousands of stars.

In the cosmology group meeting, we discussed the possibility of evaluating (directly) the likelihood for a CMB map or time-ordered data given the C-ells and a noise model. As my loyal reader knows, this requires not just performing solve (inverse multiplication) operations but also (importantly) determinant evaluations. For the discussion, mathematicians Mike O'Neil (NYU) and Leslie Greengard (CCA) and Charlie Epstein (Penn) joined us, with O'Neil leading the discussion about how we might achieve this, computationally. O'Neil outlined two strategies, one of which takes advantage of a possible HODLR form (Ambikasaran et al.), another of which takes advantage of the spherical-harmonic transform. There was some disagreement about whether the likelihood function is worth computing, with Hogg on one end (guess which) and Naess and Hill and Spergel more skeptical. Spergel noted that if we could evaluate the LF for the CMB, it opens up the possibility of doing it for LSS or intensity mapping in a three-dimensional (thick) spherical shell (think: redshift distortions and fingers of god and so on).
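For reference, here is the dense-matrix version of the computation under discussion (a Cholesky factorization delivers the solve and the log-determinant together); this is the O(N^3) brute force that any HODLR or spherical-harmonic approach has to beat:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gaussian_loglike(d, C):
    # ln N(d | 0, C) for a data vector d and a dense covariance matrix C
    cf = cho_factor(C)
    logdet = 2.0 * np.sum(np.log(np.diag(cf[0])))  # ln det C from the factor
    chi2 = d @ cho_solve(cf, d)                    # the solve (inverse multiplication)
    return -0.5 * (chi2 + logdet + d.size * np.log(2.0 * np.pi))
```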

Between meetings, I discussed deconvolutions of the TGAS color-magnitude diagram with Leistedt and Anderson, and low-hanging fruit in the comoving-star world with Oh and Price-Whelan.

2017-01-10

unsupervised models of stars

I am very excited these days about the data-driven model of stellar spectra that Megan Bedell (Chicago) and I are building. In its current form, all it does is fit multi-epoch spectra of a single star with three sets of parameters: a normalization level (one per epoch) times a wavelength-by-wavelength spectral model (one parameter per model wavelength) shifted by a Doppler shift (one per epoch). This very straightforward technology appears to be fitting the spectra to something close to the photon-noise limit (which blows me away). The places where it doesn't fit appear to be interesting. Some of them are telluric absorption residuals, and some are intrinsic variations in the lines in the stellar spectra that are sensitive to activity and convection.
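Written in log flux (where the per-epoch normalization becomes additive), the model itself is tiny; here is a numpy sketch with hypothetical names, leaving out all the real work of optimization and data handling:

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def model_log_flux(log_template, lam_template, log_norms, rvs, lam_obs):
    # per-epoch model: a normalization plus a Doppler-shifted spectral template
    out = np.empty((len(rvs), len(lam_obs)))
    for n, (log_norm, rv) in enumerate(zip(log_norms, rvs)):
        lam_rest = lam_obs / (1.0 + rv / C_KMS)  # de-shift observed wavelengths
        out[n] = log_norm + np.interp(lam_rest, lam_template, log_template)
    return out
```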

Today we talked about scaling this all up; right now we can only do a small part of the spectrum at a time (and we have a few hundred thousand spectral pixels!). We also spoke about how to regress the residuals against velocity or activity. The current plan is to investigate the residuals, but of course if we find anything we should add it in to the generative model and re-start.

2017-01-09

conversations

Not much research today, but I did have conversations with Lauren Anderson (Flatiron) about deconvolving the observed (by Gaia TGAS and APASS) color-magnitude diagram of stars, with Leslie Greengard (Flatiron) and Alex Barnett (Dartmouth) about cross-over activities between CCA and CCB at Flatiron, and with Kyle Cranmer (NYU) about his immense NSF proposal.

2017-01-07

#hackAAS at #aas229

Today was the (fifth, maybe?) AAS Hack Day; it was also the fifth day of #aas229. As always, I had a great time and great things happened. I won't use this post to list everything from the wrap-up session, but here are some personal, biased highlights:

Inclusive astronomy database
Hlozek, Gidders, Bridge, and Law worked together to create a database and web front-end for resources that astronomers can read (or use) about inclusion and astronomy, inspired in part by things said earlier at #aas229 about race and astronomy. Their system is just a prototype, but it already has a few entries, and it is designed to help you both find and add resources.
Policy letter help tool
Brett Morris led a hack that created a web interface into which you can input a letter you would like to write to your representative about an issue. It searches for words that are bad to use in policy discussions and asks you to change them, and also gives you the names and addresses of the people to whom you should send it! It was just a prototype, because it turns out there is no way right now to automatically obtain representative names and contact information. That was a frustrating finding about the state of #opengov.
Budget planetarium how-to
Ellie Schwab and a substantial crew got together a budget and resources for building a low-cost but fully functional planetarium. One component was WWT, which is now open source.
Differential equations
Horvat and Galvez worked on solving differential equations using basis functions, to learn (and re-learn) methods that might be applicable to new kinds of models of stars. They built some notebooks that demonstrate that you can easily solve differential equations very accurately with basis functions, but that if you choose a bad basis, you get bad answers! (A minimal version of the idea appears just after this list.)
K2 and the sky
Stephanie Douglas made an interface to the K2 data that shows a postage stamp from the data, the light curve, and then aligned (overlaid, even) imaging from other imaging surveys. This involved figuring out some stuff about K2's world coordinate systems, and making it work for the world.
Poster clothing
Once again, the sewing machines were out! I actually own one of these now, just for hack day. Pagnotta led a very successful sewing and knitting crew. Six of the team used a sewing machine for the first time today! In case you are still stuck in 2013: The material for sewing is the posters, which all the cool kids have printed on fabric, not paper these days!
Meta-hack
Erik Tollerud built some tools for the long-term storage and archiving of #hackAAS hacks. These leverage GitHub under the hood.
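Here, as promised above, is a minimal version of the differential-equations idea (my reconstruction, not their notebook): solve u' = -u with u(0) = 1 by least-squares collocation in a monomial basis.

```python
import numpy as np

# expand u(t) = sum_k c_k t^k and enforce u' + u = 0 at the collocation points
K = 8
t = np.linspace(0.0, 1.0, 40)
k = np.arange(K)
Phi = t[:, None] ** k                             # basis functions
dPhi = k * t[:, None] ** np.clip(k - 1, 0, None)  # their derivatives
A = np.vstack([dPhi + Phi, (k == 0).astype(float)[None, :]])  # ODE rows + u(0) row
b = np.concatenate([np.zeros(t.size), [1.0]])
c, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.abs(Phi @ c - np.exp(-t)).max())         # tiny residual against exp(-t)
```

Swap the monomials for a basis poorly suited to the solution and the residual blows up, which is (I take it) the bad-basis point.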

There were many other hacks, including people learning how to use testing and integration tools, people learning to use the ADS API, people learning how to use version control and GitHub, testing of different kinds of photometry, and visualization of various kinds of data. It was a great day, and I can't wait for next year.

Huge thanks to our corporate sponsor, Northrop Grumman, and my co-organizers Kelle Cruz, Meg Schwamb, and Abigail Stevens. NG provided great food, and Schwamb did a great job helping everyone in the room understand the (constructive, open, friendly, fun) point of the day.

2017-01-06

#aas229, day 4

I arrived at the American Astronomical Society meeting this morning, just in time (well, a few minutes late, actually) for the Special Session on Software organized by Alice Allen (ASCL). There were talks about a range of issues in writing, publishing, and maintaining software in astrophysics. I spoke about software publications (slides here) and software citations. Not only were the ideas in the session diverse, the presenters had a wide range of backgrounds (three of them aren't even astronomers)!

There were many interesting contributions to the session. I was most impressed with the data that people are starting to collect about how software is built, supported, discovered, and used. Along those lines, Iva Momcheva (STScI) showed some great data she collected about how software projects are funded and built. This follows great work she did with Erik Tollerud (STScI) on how software is used by astronomers (paper here). In their new work, they find that most software is funded by grants that are not primarily (or in many cases not even secondarily) related to the software, and that most software is written by early-career scientists. These data have great implications for the next decade of astrophysics funding and planning. In the discussion afterwards, there were comments about how hard it is to fund the maintenance of software (something I feel keenly).

Similarly, Mike Hucka (Caltech) showed great results he has on how scientists discover software for use in their research projects (paper here). He finds (surprise!) that documentation is key, but there are many other contributing factors to make a piece of research software more likely to be used or re-used by others. His results have strong implications for developers finishing software projects. One surprising thing is that scientists are less platform-specific or language-specific in their needs than you might think.

I spent part of the afternoon hiding in various locations around the meeting, hacking on an unsupervised data-driven model of stellar spectra with Megan Bedell (Chicago).

2017-01-05

making slides

My only real research accomplishment today was to make slides for my AAS talk on software publications, which is for a special session organized by Alice Allen (ASCL). The slides are available here.

2017-01-04

carbon stars, regulation of star formation, and so much more

Rix called me to discuss the problem that when we compare the chemical abundances in pairs of stars, we get stars that are more alike than we expect, given our noise model for chemical abundances. That is, we see pairs with chi-squared (far) less than the number of elements. This means (I think) that our noise estimation is overly conservative: There are (at least some) stars that we are observing at very good precision. Further evidence for my view is that there are more such (very close) pairs within open clusters than across open clusters (or in the field).
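For the record, the statistic in question is just this (a sketch with my own names); under the stated noise model its expectation is roughly the number of elements, and we are seeing values far below that:

```python
import numpy as np

def pair_chisq(ab1, ab2, err1, err2):
    # chi-squared of the element-by-element abundance differences of two stars
    d = ab1 - ab2
    return np.sum(d ** 2 / (err1 ** 2 + err2 ** 2))

rng = np.random.RandomState(1)
ab = rng.randn(15)  # 15 elements
# true scatter (0.02 dex) much smaller than the claimed errors (0.05 dex each):
print(pair_chisq(ab, ab + 0.02 * rng.randn(15),
                 0.05 * np.ones(15), 0.05 * np.ones(15)))  # comes out far below 15
```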

In stars group meeting, Jill Knapp (Princeton) spoke about carbon stars (stars with more carbon than oxygen, and I really mean more by number of atoms). She discussed dredge-up and accretion origins for these, and how we might distinguish them. She has some results on the abundance of carbon stars as a function of expected (from stellar models) surface-convection properties, which suggest accretion origins. But it is early days.

Chang-Goo Kim (Princeton) told us about simulations that are designed to understand the regulation of star formation in galaxy disks (on kpc scales). He pointed out the importance of gravity in setting the star-formation rate; these arguments are always reminiscent (to me) of the Eddington argument. His simulations include supernova feedback in the form of mechanical and radiation energy, plus magnetic turbulence and cosmic-ray pressure. He emphasized that conclusions about feedback-regulated star formation depend strongly on assumptions about the spatial correlations and locations (think: escape over time) of the supernovae relative to the dense molecular clouds in which the star formation occurs. Fundamentally, the thing that sets the star-formation rate is the pressure, which can be hydrostatic or turbulent or both.

Semyeong Oh (Princeton) and I led a discussion on the lowest-hanging fruit for projects that exploit her comoving star (and group) catalog from TGAS. Some of the lowest-hanging include investigations of the locations of the pairs in phase space, to look at heating, age, and formation mechanisms.

2017-01-03

deconvolution of labels

Lauren Anderson (CCA) and I discussed the state of our project to put spectroscopic parameters onto photometrically discovered stars using colors and magnitudes from APASS, parallaxes from Gaia TGAS, and spectroscopic parameters from the RAVE-on Catalog. We want to take the nearby neighbors in color-magnitude space and deconvolve their noisy spectroscopic parameters to make a less noisy estimate for (what you might call) the test objects. We have been using extreme deconvolution (Bovy et al.) for this, deconvolving the labels for the nearest neighbors (weighted by a likelihood). That is, find neighbors first, deconvolve second. After hours staring at the white board, we decided that maybe we should just deconvolve all the inputs up front, and do inference under the prior created by that deconvolution. Question: Is this computationally feasible?
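The "deconvolve all the inputs up front" step would look something like this sketch, using the astroML implementation of extreme deconvolution (shapes and values are stand-ins):

```python
import numpy as np
from astroML.density_estimation import XDGMM

rng = np.random.RandomState(42)
N, D = 2000, 3                              # stars; labels (say Teff, logg, [Fe/H])
X = rng.randn(N, D)                         # noisy labels, standardized
Xerr = np.tile(0.1 * np.eye(D), (N, 1, 1))  # per-star label covariances

xd = XDGMM(n_components=8, max_iter=100)
xd.fit(X, Xerr)  # EM for the noise-free underlying mixture
# xd.alpha, xd.mu, xd.V now define the deconvolved density, usable as a prior
```

The feasibility question then comes down to the EM step, which scales with the number of stars times the number of mixture components; that seems plausibly tractable at TGAS scale.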