2017-06-15

Bayesian basics; red clump

A research highlight today was the first meeting of our Bayesian Data Analysis (third edition) reading group. It lasted a lot longer than an hour! We ended up going off on a tangent about the Fully Marginalized Likelihood vs cross-validation and its Bayesian equivalents. We came up with some possible research projects there! The rest of the meeting was Bayesian basics. We decided on some problems we would do in Chapter 2. I hate to admit that the idea of having a problem set to do makes me nervous!

In the afternoon, Lauren Anderson (Flatiron) and I discussed our project to separate red-clump stars from red-giant-branch stars in the spectral domain. We have two approaches: The first is unsupervised: Can we see two spectral populations where the RC and RGB overlap? The second is supervised: Can we predict relevant asteroseismic parameters in a training set using the spectra?

2017-06-14

cryo-electron-microscopy biases

At the Stars group meeting, I proposed a new approach to asteroseismology that could work for TESS. My approach depends on the modes being (effectively) coherent, which is only true for short survey durations, where “short” can still mean years. Also, Mike Blanton (NYU) gave us an update on the APOGEE-S spectrograph, being commissioned now at LCO in Chile. Everything is nominal, which bodes very well for SDSS-IV and is great for AS-4. David Weinberg (OSU) showed up and told us about chemical-abundance constraints on a combination of yields and gas-recycling fractions.

In the afternoon I missed Cosmology group meeting, because of an intense discussion about marginalization (in the context of cryo-EM) with Leslie Greengard (Flatiron) and Marina Spivak (Flatiron). In the conversation, Charlie Epstein (Penn) came up with a very simple argument that is highly relevant. Imagine you have many observations of the function f(x), but for each one your x value has had noise applied. If you take as your estimate of the true f(x) the empirical mean of your observations, the bias you get will be (for small scatter in x) proportional to the variance in x times the second derivative of f. That's a useful and intuitive argument for why you have to marginalize.
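
Here is a minimal numerical check of that argument, in Python (my toy example, not Epstein's; the function and all numbers are arbitrary):

```python
import numpy as np

# Check: if you observe f at noisy x values and average, the bias is
# (to leading order in the scatter s) 0.5 * s**2 * f''(x) -- that is,
# proportional to the variance in x times the second derivative of f.

rng = np.random.default_rng(42)
f = np.sin                       # arbitrary smooth function
fpp = lambda x: -np.sin(x)       # its second derivative

x0 = 1.0                         # the x value we care about
s = 0.1                          # small scatter applied to x
noisy_x = x0 + s * rng.standard_normal(1_000_000)

print(f(noisy_x).mean() - f(x0))   # empirical bias of the naive mean
print(0.5 * s**2 * fpp(x0))        # Taylor-expansion prediction
```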

2017-06-13

Renaissance

I spent the day at Renaissance Technologies, where I gave an academic seminar. Renaissance is the hedge fund that created the wealth behind the Simons Foundation, among other foundations. I have many old friends there; there are many PhD astrophysicists there, including two (Kundić and Metzger) I overlapped with back when I was a graduate student at Caltech. I learned a huge amount while I was there: how they handle data, how they decide what data to keep and why, how they manage and update strategies, and what kinds of markets they work in. Just like in astrophysics, the most interesting signals are at low signal-to-noise in the data! Appropriately, I spoke about finding exoplanets in the Kepler data. There are many connections between data-driven astrophysics and contemporary finance.

2017-06-12

reading the basics

Today we decided that the newly-christened Astronomical Data Group at Flatiron will start a reading group in methods. Partially because of the words of David Blei (Columbia) a few weeks ago, we decided to start with BDA3, part 1. We will do two chapters a week, and also meet twice a week to discuss them. I haven't done this in a long time, but we realized that it will help our research to do more basic reading.

This week, Maggie Lieu (ESA) is visiting Justin Alsing (Flatiron) to work (in part) on Euclid imaging analysis. We spent some time discussing how we might build a training set for cosmic rays, asteroids, and other time-variable phenomena in imaging, in order to train some kind of model. We discussed the complications of making a ground-truth data set out of existing imaging. Next up: Look at what's in the HST Archive.

2017-06-11

summer plans

I worked for Hans-Walter Rix (MPIA) this weekend: I worked through parts of the After Sloan 4 proposal to the Sloan Foundation, especially the parts about surveying the Milky Way densely with infrared spectra of stars. I also had long conversations with Rix about our research plans for the summer. We have projects to do, and a Gaia Sprint to run!

2017-06-08

music and stars

First thing, I met with Schiminovich (Columbia), Mohammed (Columbia), and Dun Wang (NYU) to discuss our GALEX imaging projects. We decided that it is time for us to produce titles, abstracts, outlines, and lists of figures for our next two papers. We also realized that we need to produce pretty-picture maps of the plane survey data, and compare them to Planck and GLIMPSE and other related projects.

I had a great lunch meeting with Brian McFee (NYU) to catch up on his research (on music!) and ask his advice on various time-domain projects I have in mind. He has new systems to recognize chords in music, and he claims higher performance than previous work. We discussed time-series methods, including auto-encoders and HMMs. As my loyal reader knows, I much prefer methods that treat the data probabilistically over methods that require complete data with no missing information, and so on. McFee had various thoughts on how we might adapt methods that expect complete data to tasks that provide incomplete data, like those involving Kepler light curves.
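
For what it's worth, the cleanest version of “treat the data probabilistically” is the jointly Gaussian case, where handling missing data is just marginalization, and marginalization is just dropping rows and columns. A minimal sketch (my toy, not anything McFee proposed; the covariance is made up):

```python
import numpy as np
from scipy.stats import multivariate_normal

def observed_loglike(y, mu, Sigma):
    # Log-likelihood of only the observed entries of y under the joint
    # Gaussian N(mu, Sigma): marginalizing out the missing entries of a
    # Gaussian is exactly sub-selecting the mean and covariance.
    m = ~np.isnan(y)
    return multivariate_normal.logpdf(y[m], mean=mu[m], cov=Sigma[np.ix_(m, m)])

# toy: a five-epoch light curve with two missing cadences
mu = np.zeros(5)
lags = np.abs(np.subtract.outer(np.arange(5), np.arange(5)))
Sigma = 0.5 ** lags              # made-up smooth covariance
y = np.array([0.3, np.nan, -0.1, 0.2, np.nan])
print(observed_loglike(y, mu, Sigma))
```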

2017-06-07

post-main-sequence stellar evolution

At Stars group meeting, Matteo Cantiello (Flatiron) had us install MESA and then gave us a tutorial on aspects of post-main-sequence evolution of stars. There were many amazing and useful things, and he cleared up some misconceptions I had about energy production and luminosity during the main-sequence and red-giant phases of stellar evolution. He showed some hope (because of convective-region structure, which in turn depends on opacity, which in turn depends on chemical abundances) that we might be able to measure some aspects of chemical abundances with asteroseismology in certain stellar types.

In the Cosmology group meeting, we discussed many topics, but once again I got fired up about automated or exhaustive methods for searching for (and analyzing) estimators, both for making measurements in cosmology, and for looking for anomalies in a controlled way (controlled in the multiple-hypothesis sense). One target is the neutrino mass, which is in the large-scale structure, but subtly.

In the space between meetings, Daniela Huppenkothen (NYU) and I worked with Chris Ick (NYU) to get him started building a mean model of Solar flares, and looking at the power spectrum of the flares and their mean models. The idea is to head towards quantitative testing of quasi-periodic oscillation models.
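
The starting point is something like the following (every functional form and number here is a placeholder of mine, not Ick's actual model):

```python
import numpy as np

def flare_mean_model(t, t0=2.0, amp=10.0, rise=0.1, decay=1.0):
    # Placeholder flare shape: Gaussian rise, exponential decay.
    return np.where(t < t0,
                    amp * np.exp(-0.5 * (t - t0)**2 / rise**2),
                    amp * np.exp(-(t - t0) / decay))

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 2048)
flux = flare_mean_model(t) + rng.standard_normal(t.size)  # toy data

# Power spectrum of the residuals away from the mean model; a QPO would
# appear as localized excess power on top of the noise floor.
resid = flux - flare_mean_model(t)
freq = np.fft.rfftfreq(t.size, d=t[1] - t[0])
power = np.abs(np.fft.rfft(resid))**2
```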

2017-06-06

don't apply the Lutz-Kelker correction!

One great research moment today was Stephen Feeney (Flatiron) walking into my office to ask me about the Lutz–Kelker correction. This is a correction applied to parallax measurements to account for the point that there are far, far more stars at smaller parallaxes (larger distances) than there are at larger parallaxes (smaller distances). Because of (what I think of as being) Jacobian factors, the effect is stronger in parallax than it is in distance. The LK correction corrects for what—in luminosity space—is sometimes called Eddington bias (and often wrongly called Malmquist bias). Feeney's question was: Should he be applying this LK correction in his huge graphical model for the distance ladder? And, implicitly, should the supernova cosmology teams have applied it in their papers?

The short answer is: No. It is almost never appropriate to apply the LK correction to a parallax. The correction converts a likelihood description (the likelihood mode, the data) into a posterior description (the posterior mode) under an improper prior. Leaving aside all the issues with the wrongness of the prior, this correction is bad to make because in any inference using parallaxes, you want the likelihood information from the parallax-measuring experiment. If you use the LK-corrected parallax in your inference, you are multiplying in the LK prior and whatever prior you are using in your own inference, which is inconsistent, and wrong!
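
In symbols (my notation): the LK-corrected parallax is the mode of a posterior, and using it downstream multiplies the LK prior in on top of whatever prior you actually want:

```latex
% the LK ``correction'' delivers a posterior mode under a prior p_LK:
\hat\varpi_{\mathrm{LK}} = \arg\max_{\varpi}\;
    p(D \mid \varpi)\, p_{\mathrm{LK}}(\varpi)
% so if you treat it as a likelihood summary in an inference that has
% its own prior p_me, you are effectively computing
p(\varpi \mid D) \propto
    p(D \mid \varpi)\, p_{\mathrm{LK}}(\varpi)\, p_{\mathrm{me}}(\varpi)
% with two priors multiplied together: inconsistent, and wrong.
```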

I suspect that if we follow this line of argument down, we will discover mistakes in the distance-ladder Hubble-constant projects! For this reason, I insisted that we start writing a short note about this.

Historical note: I have a paper with Ed Turner (Princeton) from the late 90s that I now consider totally wrong, about the flux-measurement equivalent of the Lutz-Kelker correction. It is wrong in part because it uses wrong terminology about likelihood and prior. It is wrong in part because there is literally a typo that makes one of the equations wrong. And it is wrong in part because it (effectively) suggests making a correction that one should (almost) never make!

2017-06-05

buying and selling correct information

Well of course Adrian Price-Whelan (Princeton) had lots of comments on the paper, so Lauren Anderson (Flatiron) and I spent the day working on them. So close now!

I had lunch with Bruce Knuteson (Kn-X). We talked about many things, including the knowledge exchange that Kn-X runs: The idea is to make it possible to buy and sell correct information, even from untrusted or anonymous sources. The purchase only goes through if the information turns out to be true (or true-equivalent, like useful). It has lots of implications for news, but also for science, in principle. He asked me how we get knowledge from others in astronomy. My answer: Twitter (tm)!

Late in the day, Dan Foreman-Mackey (UW) and I had a long discussion about many topics, but especially possible events or workshops we might run next academic year at the Flatiron Institute. One is about likelihood-free or ABC or implicit inference. Many people in CCA and CCB are interested in these subjects, and Foreman-Mackey is thinking about expanding in this direction. Another is about extreme-precision radial velocity measurements, where models of confusing stellar motions and better methods in the pipelines might both have big impacts. Another is about photometry methods for the TESS satellite, which launches next year. We also discussed the issue that it is important, when we organize any workshop, to make it possible to discover all the talent out there that we don't already know about: That talent we don't know about will increase workshop diversity, and increase the amount we ourselves learn.

2017-06-02

oscillation-timing exoplanet discovery

First thing, Ruth Angus (Columbia) and I discussed an old, abandoned project of mine to find exoplanets by looking at timing residuals (as it were) on high-quality (like nearly coherent) oscillating stars. It is an old idea, best executed so far (to my knowledge; am I missing anything?) by Simon Murphy (Sydney). I have ideas for improvements; they involve modeling the phase shift as a continuous function, not binning and averaging phase shifts (which is the standard operating procedure). My approach uses results from the Bayesian time-series world to build a likelihood function (or maybe a pseudo-likelihood function). One of the things I like about it is that it could be used on pulsar timing too.
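
A sketch of the forward model I have in mind (every parameter value below is invented for illustration):

```python
import numpy as np

def model_flux(t, nu, a_ls, P_orb, phi_orb=0.0):
    # Light-travel (Roemer) delay of the pulsating star on a circular
    # orbit; a_ls is the projected semi-major axis in light-seconds.
    tau = a_ls * np.sin(2.0 * np.pi * t / P_orb + phi_orb)
    # The key point: the phase shift enters the model as a smooth,
    # continuous function of time; nothing is binned or averaged.
    return np.sin(2.0 * np.pi * nu * (t - tau))

# toy: a coherent 200-uHz oscillator, companion on a 100-day orbit
t = np.arange(0.0, 200.0, 0.02) * 86400.0        # times in seconds
flux = model_flux(t, nu=2.0e-4, a_ls=5.0, P_orb=100.0 * 86400.0)
# a Gaussian likelihood then compares this model directly to the
# observed fluxes, and the orbital parameters are fit like any others
```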

For the rest of the day, Lauren Anderson (Flatiron) and I did a full-day paper sprint on her Gaia TGAS color-magnitude diagram and parallax de-noising paper. We finished! We decided to give Price-Whelan a weekend to give it a careful once-over and submit on Monday.

2017-06-01

Simons

It was a low-research day. But I did learn a lot about the Simons Foundation, in a set of meetings that introduce new employees to the activities and vision of the Foundation.

2017-05-31

variational inference

Today was a great day of group meetings! At the Stars group meeting, Stephen Feeney (Flatiron) showed us the Student t distribution, and showed how it can be used in a likelihood function (with only one additional parameter) to capture un-modeled outliers. Semyeong Oh (Princeton) updated us on the pair of stars she has found with identical space velocities but very different chemical abundances. And Joel Zinn (OSU) told us about new approaches to determining stellar parameters from light curves. This is something we discuss a lot at Camp Hogg, so it is nice to see some progress!
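
The trick, in a sketch (my toy numbers; scipy's parameterization of the t distribution assumed):

```python
import numpy as np
from scipy.stats import norm, t as student_t

def gaussian_loglike(resid, sigma):
    return norm.logpdf(resid, scale=sigma).sum()

def robust_loglike(resid, sigma, nu):
    # One extra parameter, nu: small nu gives heavy tails that absorb
    # un-modeled outliers; nu -> infinity recovers the Gaussian.
    return student_t.logpdf(resid, df=nu, scale=sigma).sum()

rng = np.random.default_rng(3)
resid = rng.standard_normal(100)
resid[:5] += 20.0                        # five gross outliers
print(gaussian_loglike(resid, 1.0))      # catastrophically penalized
print(robust_loglike(resid, 1.0, nu=2))  # far more forgiving
```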

We had the great idea to invite David Blei (Columbia) and Rajesh Ranganath (Princeton) to the Cosmology group meeting today. It was great! After long introductions around the (full) room, we gave the floor to Blei, who chose to tell us about the current landscape of variational methods for inference in large models with large data. His group has been doing lots of work there. The discussion he led also ranged over a wide range of subjects, including fundamental Bayesian basics, problem structure, and methods for deciding which range of inference methodologies might apply to your specific problem. The discussion was lively, and the whole event was another reminder that getting methodologists and astronomers into the same room is often game-changing. We have identified several projects to discuss in more depth for a possible collaboration.

[With this post, this blog just passed 2^11.5 posts. I realize that a fractional power of two is not that impressive, but it is going to be a long time to 2^12 and I'll be lucky to ever publish post number 2^13!]

2017-05-30

interdisciplinary inference meetings

Justin Alsing (Flatiron) organized an interdisciplinary meeting at Flatiron across astrophysics, biology, and computing, to discuss topics of mutual interest in inference or inverse problems. Most of the meeting was spent with us going around the room describing what kinds of problems we work on so as to find commonalities. Some interesting ideas: The neuroscientists said that not only do they have data analysis problems, they also want to understand how brains analyze data! Are there relationships there? Many people in the room from both biology and astronomy are in the “likelihood-free” regime: Lots of simulations, lots of data, no way to compare! That will become a theme, I predict. Many came to learn new techniques, and many came to learn what others are doing, so that suggests a format, going forward, in which we do a mix of tutorials, problem statements, and demonstrations of results. We kicked it off with Lauren Anderson (Flatiron) describing parallaxes and photometric parallaxes. [If you are in the NYC area and want to join us for future meetings, drop me a line.]

2017-05-27

measuring the velocity of a star

Yesterday and today I wrote code. This is a much rarer activity than I would like! I wrote code to test different methods for measuring the centroid of an absorption line in a stellar spectrum, with applications to extreme precision radial-velocity experiments. After some crazy starts and stops, I was able to strongly confirm my strong expectation: Cross-correlation with a realistic template is far better for measuring radial velocities than cross-correlation with a bad template (especially a binary mask). I am working out the full complement of experiments I want to do. I am convinced that there is a (very boring) paper to be written.
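
Here is a stripped-down version of the kind of experiment I mean (one Gaussian line; a deliberately too-wide template stands in for the bad template; all numbers are invented, and the real experiments do more):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(-50.0, 50.0, 1.0)          # pixel grid
depth, width = 0.5, 2.0                  # a single absorption line

def absorption(center, w):
    return depth * np.exp(-0.5 * (x - center)**2 / w**2)

def ccf_centroid(data, w, grid):
    # Centroid = location of the cross-correlation peak against a
    # template of width w, on a fine grid of trial shifts.
    ccf = [np.dot(data, absorption(c, w)) for c in grid]
    return grid[int(np.argmax(ccf))]

grid = np.arange(-1.0, 1.5, 0.01)
good, bad = [], []
for _ in range(500):
    data = absorption(0.3, width) + 0.02 * rng.standard_normal(x.size)
    good.append(ccf_centroid(data, width, grid))        # matched template
    bad.append(ccf_centroid(data, 3.0 * width, grid))   # wrong template

print(np.std(good), np.std(bad))   # the realistic template wins
```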

2017-05-25

what is math? interpolation of imaging

The research highlight of the day was a long call with Dustin Lang (Toronto) to discuss interpolation, centroiding, and (crazily) lexicographic ordering. The latter is part of a project I want to do to understand how to search in a controlled way for useful statistics or informative anomalies in cosmological data. He found it amusing that my request of mathematicians for a lexicographic ordering of statistical operations was met with the reaction “that's not math, that's philosophy”.

On centroiding and interpolation: It looks like Lang is finding (perhaps not surprisingly) that standard interpolators (the much-used approximations to sinc-interpolation) in astronomy very slightly distort the point-spread function in imaging, and that distortion is a function of sub-pixel shift. He is working on making better interpolators, but both he and I are concerned about reinventing wheels. Some of the things he is worried about will affect spectroscopy as well as imaging, and, since EPRV projects are trying to do things at the 1/1000 pixel level, it might really, really matter.
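
A toy version of the test (cubic-spline shifting via scipy standing in for whatever interpolator a given pipeline actually uses):

```python
import numpy as np
from scipy.ndimage import shift as spline_shift

# Sample a Gaussian PSF on a pixel grid, shift it by sub-pixel amounts,
# and watch the second moment (the PSF "width") wobble with the
# sub-pixel phase of the shift: the interpolation distorts the PSF.
x = np.arange(-16.0, 17.0)
psf = np.exp(-0.5 * x**2 / 1.2**2)
psf /= psf.sum()

for dx in [0.0, 0.1, 0.25, 0.5]:
    shifted = spline_shift(psf, dx, order=3)   # cubic-spline interpolation
    mean = np.sum(x * shifted) / np.sum(shifted)
    var = np.sum((x - mean)**2 * shifted) / np.sum(shifted)
    print(dx, var)   # the variance depends (slightly) on the sub-pixel shift
```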

2017-05-24

chemical correlates of planet system architecture

At Stars group meeting, Jo Bovy (Toronto) demonstrated to us that the red-giant branch in Gaia DR1 TGAS is populated about as you would expect from a simple star-formation history and stellar evolution tracks. This was surprising to me: The red clump is extremely prominent. This project involved building an approximate selection function for TGAS, which he has done and released open-source!

John Brewer (Yale) showed relationships he has found between planet-system architectures and stellar chemical abundances. He cleverly designed complete samples of different kinds of planetary systems to make comparisons on a reasonable basis. He doesn't have a causal explanation or causal inference of what he is finding. But there are some very strong covariances of chemical-abundance ratios with system architectures. This makes me more excited than ever to come up with some kind of general description or parameterization of a bound few-body system that is good for inference.

I spent the afternoon at CIS 303, a middle school in the Bronx, as part of their Career and College Day. This is an opportunity for middle schoolers to talk with people from a huge range of careers and backgrounds about what they do and how they got there. So much fun. I also ran into Michael Blanton (NYU) at the event!

2017-05-23

mentoring and noise

Today I was only able to spend a small amount of time at a really valuable (and nicely structured) mentoring workshop run by Saavik Ford and CUNY faculty. The rest of the day I sprinted on my Exoplanet Research Program proposal, in which I am writing about terms in the extreme precision radial-velocity noise budget!

2017-05-22

quasi-periodic solar flares; TESS

In the morning, Daniela Huppenkothen (NYU) and I discussed Solar flares and other time-variable stellar phenomena with Chris Ick (NYU). He is going to help us take a more principled probabilistic approach to the question of whether flares contain quasi-periodic oscillations. He is headed off to learn about Gaussian Processes.

Armin Rest (STScI) was around today; he discussed image differencing with Dun Wang (NYU). After their discussions, we decided to make Wang's code easily installable, and get Rest to install and run it. Rest wants to have various image-differencing or transient-discovery pipelines running on the TESS data in real time (or as real as possible), and this could form the core of that. Excited!

2017-05-19

exploding white dwarfs

Abi Polin (Berkeley) came through NYU this week. Today she delivered a great seminar on explosions of white dwarfs. She is looking at different ignition mechanisms, and trying to predict the resulting supernova spectra and light curves. This modeling requires a huge range of physics, including gastrophysics, nuclear reaction networks, and photospheres (both for absorption and emission lines). The current models have serious limitations (like being one-d, which she intends to fix during her PhD), but they strongly suggest that type Ia supernovae (the ones that are created by white-dwarf explosions) come from a narrow range in white-dwarf mass. If you go too high in mass, you over-produce nickel. If you go too low in mass, you under-produce nickel and get way under-luminous. In addition to the NYU CCPP crew, Saurabh Jha (Rutgers) and Armin Rest (STScI) were in the audience, so this talk was followed by a lively lunch! Jha suggested that the narrow mass range implied by the talk could also help with understanding the standard-candle-ness of these explosions.

2017-05-18

epoch of reionization

I had the realization that I can reduce my concerns about radial-velocity fitting (given a spectrum) to the problem of centroiding a single spectral line, and then scale up using information theory. So there is a paper to write! I sketched an abstract.
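
The scaling I have in mind is just the Fisher-information (Cramér–Rao) bound:

```latex
% For pixel fluxes f_i(v) depending on a velocity (centroid) v, with
% independent Gaussian errors sigma_i, any unbiased estimator obeys
\sigma_v \;\geq\; \left[\,\sum_i \frac{1}{\sigma_i^2}
    \left(\frac{\partial f_i}{\partial v}\right)^{\!2}\right]^{-1/2} .
% N similar lines multiply the Fisher information by N, so the
% single-line answer scales up to a full spectrum like 1/sqrt(N).
```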

In the morning, Andrei Mesinger (SNS) gave a talk about the epoch of reionization. He argued fairly convincingly that, between Planck, Lyman-alpha emission from very high-redshift quasars and galaxies, and the growth of dark-matter structure, the epoch of reionization is pretty well constrained now, at a redshift of around 7 to 8. The principal observation (from my perspective) is that the optical depth to the surface of last scattering is close to the minimum possible value (given what we know out to redshifts of 5 or 6). He also discussed what we will learn from 21-cm projects, and—like Colin Hill a few weeks ago—is looking for the right statistics. I really have to start a project that finds decisive (and symmetry-constrained) summary statistics, given simulations!

2017-05-17

Nice; counter-rotating disks

At Stars group meeting, Keith Hawkins (Columbia) summarized the Nice meeting on Gaia. Some Gaia Sprint and Camp Hogg results were highlighted there in Anthony Brown's talk, apparently. There were results on Gaia accuracy of interest to us (and testable by us), and also things about the velocity distribution in the Galaxy halo.

Tjitske Starkenburg (Flatiron) talked about counter-rotating components in disk galaxies: She would like to find observational signatures that can be identified in both simulations and data. But she also wants to understand their origins in the simulations. Interestingly, she finds many different detailed formation histories that can lead to counter-rotating components. That is consistent with their high frequency in the observed samples.

2017-05-16

falsifying results by philosophical argument

I finally got some writing done today, in the Anderson paper on the empirical, deconvolved color-magnitude diagram. We are very explicitly structuring the paper around the assumptions, and each of the assumptions has a name. This is part of my grand plan to develop a good, repeatable, useful, and informative structure for a data-analysis paper.

I missed a talk last week by Andrew Pontzen (UCL), so I found him today and discussed matters of common interest. It was a wide-ranging conversation, but two highlights were the following: We discussed causality or causal explanations in a deterministic-simulation setting. How could it be said that “mergers cause star bursts”? If everything is deterministic, isn't it equally true that star bursts cause mergers? One question is the importance of time or time ordering (or really light-cone ordering). For the statisticians who think about causality, this doesn't enter explicitly. I think that some causal statements in galaxy evolution are wrong on philosophical grounds, but we decided that maybe there is a way to save causality provided that we always refer to the initial conditions (kinematic state) on a prior light cone. Oddly, in a deterministic universe, causal explanations are mixed up with free will and subjective knowledge questions.

Another thing we discussed is a very neat trick he figured out to reduce cosmic variance in simulations of the Universe: Whenever you simulate from some initial conditions, also simulate from the negative of those initial conditions (all phases rotated by 180 degrees, or all over-densities turned to under, or whatever). The average of these two simulations will cancel out some non-trivial terms in the cosmic variance!
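
A cartoon of why the trick helps (a quadratic map stands in for real non-linear growth; everything here is a toy of mine, not Pontzen's actual method):

```python
import numpy as np

# Terms odd in the initial field delta average to zero in expectation
# but fluctuate realization by realization; evaluating the statistic
# on (delta, -delta) pairs cancels them exactly.
rng = np.random.default_rng(7)
evolve = lambda d: d + 0.5 * d**2        # toy non-linear "growth"
stat = lambda d: np.var(evolve(d))       # toy observable

single, paired = [], []
for _ in range(2000):
    d = rng.standard_normal(1024)        # toy initial conditions
    single.append(stat(d))
    paired.append(0.5 * (stat(d) + stat(-d)))

print(np.std(single), np.std(paired))    # paired runs scatter less
```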

The day ended with a long call with Megan Bedell (Chicago), going over my full list of noise sources in extreme precision radial-velocity data (think: finding and characterizing exoplanets). She confirmed everything in my list, added a few new things, and gave me keywords and references. I think a clear picture is emerging of how we should attack (what NASA engineers call) the tall poles. However, it is not clear that the picture will get set down on paper in time for the Exoplanet Research Program funding call!

2017-05-15

exoplanets

Today not much! I had a valuable conversation with Trisha Hinners (NG Next) about machine-learning projects with the Kepler data, and I did some pen-and-paper writing and planning for my proposal on exoplanet-related extreme precision radial-velocity measurements.

2017-05-12

vertical action is a clock

Ruth Angus (Columbia) and I discussed the state of her hierarchical Bayesian model to self-calibrate a range of stellar age indicators. Bugs are fixed and it appears to be working. We discussed the structure of a Gibbs sampler for the problem. We reviewed work Angus and also Melissa Ness (MPIA) did at the 2016 NYC Gaia Sprint on vertical action dispersion as a function of stellar age. Beautiful results! We had an epiphany and decided that we have to publish these results, without waiting for the Bayesian inference to be complete. That is, we should publish a simple empirical paper based on TGAS, proposing the general point that vertical action provides a clock with very good properties: It is not precise, but it is potentially very accurate, because it is very agnostic about what kind of star it is timing.

2017-05-11

cosmological anomalies

I had lunch with Jesse Muir (Michigan), and then she gave an informal seminar after lunch. She has been working on a number of things in cosmological measurement. One highlight is an investigation of the anomalies (strange statistical outliers or badly fit aspects) in the CMB: She has asked how they are related, and whether they are really independent. I discussed with her the possibility that we might be able to somehow lexicographically order all possible anomalies and then search for them in an ordered way, keeping track of all possible measurements and their outcomes, as a function of position in the ordering. The reason I am interested in this is because some of the anomalies are “odd enough” that I would expect them to come up pretty late in any ordering. That makes them not-that-anomalous! This somehow connects to p-values and p-hacking and so on. I also discussed with Muir the possibility of looking for anomalies in the large-scale structure. This should be an even richer playground.

2017-05-10

BHs in GCs, and a new job

In Stars group meeting, Ruth Angus (Columbia) showed her catalog of rotation periods in the Kepler and K2 fields. She has a huge number! We discussed visualizations of these that would be convincing and also possibly create new scientific leads.

Also in Stars group meeting, Arash Bahramian (MSU) spoke about black holes in globular clusters. He discussed how they use simultaneous radio and X-ray observations to separate the BHs from neutron stars: Radio reveals jet energy and X-ray reveals accretion energy, which (empirically) are different for BHs and NSs. However, in terms of making a data-driven model, the only situation in which you are confident that something is a NS is when you see X-ray bursts (because: surface effect), and the only situation in which you are confident that something is a BH is when you can see a dynamical mass substantially greater than 1.4 Solar masses (because: equation of state). He highlighted some oddities around Terzan 5, which is the globular cluster with the largest number of X-ray sources, and also an extremely high density and inferred stellar collision rate. This was followed by much discussion of relationships between collision rate and other cluster properties, and also some discussion of individual X-ray sources.

[In non-research news: Today I became an employee of the Flatiron Institute, as a new group leader within the CCA! Prior to today I was only in a consulting role.]

2017-05-08

looking at the Sun, through the freakin' walls

In the CCPP Brown-Bag talk, Duccio Pappadopulo (NYU) gave a very nice and intuitive introduction to the strong CP problem (although he really presented it as the strong T problem!). He discussed the motivation for the QCD axion and then experimental bounds on it. He mentioned at the end his own work that permits the QCD axion to have much stronger couplings to photons, and therefore be much more readily detected in the laboratory. He discussed an important kind of experiment that I had not heard about previously: The helioscope, which is an x-ray telescope in a strong magnetic field, looking at the Sun, but inside a shielded building (search "axion helioscope"). That is, the experiment asks the question: Can we see through the walls? This tests the coupling of the QCD sector and the photon to the axion, because (QCD) axions are created in the Sun, and some will convert (using the magnetic field to obtain a free photon) into x-ray photons at the helioscope. Crazy, but seriously these are real experiments! I love my job.

2017-05-05

Dr Yuqian Liu

Today it was a pleasure to participate in the PhD defense of Yuqian Liu (NYU), who has exploited the world's largest dataset on stripped supernovae, part of the huge spectral collection of Maryam Modjaz's group at NYU. She pioneered various data-driven methods for the spectral analysis. One is to create a data-driven or empirical noise model using filtering in the Fourier domain. Another is to fit shifted and broadened lines using empirical spectra and Bayesian inference. She uses these methods to automatically make uniform measurements of spectral features from very heterogeneous data, gathered from multiple sources with different levels of reliability. Her results rule out various (one might say: All!) physical models for these supernovae. Her results are all available open-source, and she has pushed her results into SNID, which is the leading supernova classification software. Congratulations Dr Liu!

2017-05-04

asteroseismological estimators; and Dr Hahn!

Because of the availability of Dan Huber (Hawaii) in the city today, we moved Stars group meeting to Thursday! He didn't disappoint, telling us about asteroseismology projects in the Kepler and K2 data. He likes to emphasize that the >20,000 stars in the Kepler field that have measured nu-max and delta-nu have—every one of them—been looked at by (human) eye. That is, there is no fully safe automated method for measuring these. My loyal reader knows that this is a constant subject of conversation in group meeting, and has been for years now. We discussed developing better methods than what is done now.

In my mind, this is all about constructing estimators, which is something I know almost nothing about. I proposed to Stephen Feeney (Flatiron) that we simulate some data and play around with it. Sometimes good estimators can be inspired by fully Bayesian procedures. We could also go fully Bayes on this problem! We have the technology (now, with new Gaussian-Process stuff). But we anticipate serious slowness: We need methods that will work for TESS, which means they have to run on hundreds of thousands to millions of light curves.
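
Even a crude simulate-and-estimate loop would be a start. A cartoon (all functional forms and numbers are mine, and the granulation background is assumed known, which it never is):

```python
import numpy as np

# Toy power spectrum: granulation-like background plus a Gaussian
# envelope of oscillation power at numax, times the usual chi^2
# (exponential) fluctuations of a periodogram.
rng = np.random.default_rng(5)
nu = np.linspace(1.0, 300.0, 4096)       # frequency, arbitrary units
numax_true = 110.0
background = 1.0 / (1.0 + (nu / 40.0)**2)
envelope = 0.6 * np.exp(-0.5 * ((nu - numax_true) / 15.0)**2)
power = (background + envelope) * rng.exponential(size=nu.size)

# Crude estimator: divide out the background, smooth heavily, take the peak.
kernel = np.exp(-0.5 * (np.arange(-200, 201) / 60.0)**2)
kernel /= kernel.sum()
smoothed = np.convolve(power / background - 1.0, kernel, mode="same")
print(nu[np.argmax(smoothed)])           # lands near numax_true
```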

In the afternoon, Chang Hoon Hahn (NYU) defended his PhD, which is on methods for making large-scale structure measurements. We have joked for many years that my cosmology group meeting is always and only about fiber collisions. (Fiber collisions: Hardware-induced configurational constraints on taking spectra or getting redshifts of galaxies that are close to one another on the sky.) This has usually been Hahn's fault, and he didn't let us down in his defense. Fiber collisions is a problem that seems like it should be easy and really, really is not. It is an easy problem to solve if you have an accurate cosmological model at small scales! But the whole point is that we don't. And in the future, when surveys use extremely complicated fiber positioners (instead of just drilling holes), the fiber-collision problem could become very severe. Very. As in: It might require knowing (accurately) very high-point functions of the galaxy distribution. More on this at some point: This problem has legs. But, in the meantime: Congratulations Dr Hahn!

2017-05-03

Kronos–Krios; photometric redshifts without training

In the early morning, Ana Bonaca (Harvard) and I discussed our information-theory project on cold stellar streams. We talked about generalizing our likelihood model or form, and what that would mean for the lower bound (on the variance of any unbiased estimator; the Cramér–Rao bound). I have homework.

At the Flatiron, instead of group meeting (which we moved to tomorrow), we had a meeting on the strange pair of stars that Semyeong Oh (Princeton) and collaborators have found, with very odd chemical differences. We worked through the figures for the paper, and all the alternative explanations for their formation, sharpening up the arguments. In a clever move, David Spergel (Flatiron) named them Kronos and Krios. More on why those names, soon.

In the afternoon, in cosmology group meeting, Boris Leistedt (NYU) talked about his grand photometric-redshift plan, in which the templates and the redshifts are all estimated together in a beautiful hierarchical model. He plans to get photometric redshifts with no training redshifts whatsoever, and also no use of pre-set or known spectral templates (though he will compose the data-driven templates out of sensible spectral components). There was much discussion of the structure of the graphical model (in particular about selection effects). There was also discussion about doing low-level integrals fast or analytically.

2017-05-02

don't cross-correlate with the wrong template!

In principle, writing a funding proposal is supposed to give you an opportunity to reflect on your research program, think about different directions, and get new insights about projects not yet started. In practice it is a time of copious writing and anxiety, coupled with a lack of sleep! However, I have to admit that today my experience was the former: I figured out (in preparing my Exoplanet Research Program proposal for NASA) that I have been missing some very low-hanging fruit in my thinking about the error budget for extreme precision radial-velocity experiments:

RVs are obtained (usually) by cross-correlations, and cross-correlations only come close to saturating the Cramér–Rao bound when the template spectrum is extremely similar to the true spectrum. That just isn't even close to true for most pipelines. Could this be a big term in the error budget? Maybe not, but it has the great property that I can compute it. That's unlike most of the other terms in the error budget! I had a call with Megan Bedell (Chicago) at the end of the day to discuss the details of this. (This also relates to things I am doing with Jason Cao (NYU).)

In other news, I spent time reading about linear algebra, (oddly) to brush up on some notational things I have been kicking around. I read about tensors in Kusse and Westwig and, in the end, I was a bit disappointed: They never use the transpose operator on vectors, which I think is a mistake. However, I did finally (duh) understand the difference between contravariant and covariant tensor components, and why I have been able to do non-orthonormal geometry (my loyal reader knows that I think of statistics as a sub-field of geometry) for years without ever worrying about this issue.

2017-05-01

Dr Sanford

I gave the CCPP Brown-Bag talk today, about how the Gaia mission works, according to my own potted story. I focused on the beautiful hardware design and the self-calibration.

Before that, Cato Sanford (NYU) defended his PhD, about model non-equilibrium systems in which there are swimmers (think: cells) in a homogeneous fluid. He used a very simple Gaussian Process as the motive force for each swimmer, and then asked things like: Is there a pressure force on a container wall? Are there currents when the force landscape is non-trivial? And so on. His talk was a bit bio-stat-mech for my astrophysical brain, but I was stoked with the results, and I feel like the things we have done with Gaussian Processes might lead to intuitions in these crazy situations. The nice thing is that if you go from Brownian Motion to a GP-regulated walk, you automatically go out of equilibrium!

2017-04-29

after-Sloan-4 proposal writing, day 2

I violated house rules today and spent a Saturday continuing work from yesterday on the planning and organization of the AS4 proposal. We slowly walked through the whole proposal outline, assigning responsibilities for each section. We then walked through again, designing figures that need to be made, and assigning responsibilities for those too. It took all day! But we have a great plan for a great proposal. I'm very lucky to have this impressive set of colleagues.

2017-04-28

after-Sloan-4 proposal writing, day 1

Today was the first day of the AS-4 (After-Sloan-4) proposal-writing workshop, in which we started a sprint towards a large proposal for the Sloan Foundation. Very intelligently, Juna Kollmeier (OCIW) and Hans-Walter Rix (MPIA) started the meeting by having every participant give a long introduction, in which they not only said who they are and what they are interested in, but they also said what they thought the biggest challenges are in making this project happen. This took several hours, and got a lot of the big issues onto the table.

For me, the highlights of the day were presentations by Rick Pogge (OSU) and Niv Drory (Texas) about the hardware work that needs to happen. Pogge talked about the fiber positioning system, which will include robots, and a corrector, and a [censored] of a lot of sophisticated software (yes, I love this). It will reconfigure fast, to permit millions of exposures (something like 25 million in five years) with short exposure times. Pogge really convinced me of the feasibility of what we are planning on doing, and delivered a realistic (but aggressive) timeline and budget.

Drory talked about the Local Volume Mapper, which mates a fiber-based IFU to a range of telescopes with different focal lengths (but same f-ratio) to make 3-d data cubes at different scales for different objects and different scientific objectives. It is truly a genius idea (in part because it is so simple). He showed us that they are really, really good at making close-packed fiber bundles, something they learned how to do with MaNGA.

It was a great day of serious argument, brutally honest discussion of trade-offs, and task lists for a hard proposal-writing job ahead.

2017-04-26

void–galaxy cross-correlations, stellar system encounters

Both Flatiron group meetings were great today. In the first, Nathan Leigh (AMNH) spoke about collisions of star systems (meaning 2+1 interactions, 2+2, 2+3, and 3+3), using collisionless dynamics and the sticky star approximation (to assess collisions). He finds a simple scaling of collision probabilities in terms of combinatorics; that is, the randomness or chaos is efficient, or more efficient than you might think. The crowd had many questions about scattering in stellar systems and equipartition.

This led to a wider discussion of dynamical scattering. We asked the question: Can we learn about dynamical heating in stellar systems by looking at residual exoplanet populations (for example, if the heating is by close encounters with stars, planetary systems should be truncated)? We concluded that wide-separation binaries are probably better tracers, from the perspective that they are easier to see. Then we asked: Can the Sun's own Oort cloud be used to measure star-star interactions? And: Are there interstellar comets? David Spergel (Flatiron) pointed out the (surprising, to me) fact that there are no comets on obviously hyperbolic orbits.

Raja GuhaThakurta (UCSC) is in town; he showed an amazing video zooming in to a tiny patch of Andromeda’s disk. He discussed Julianne Dalcanton’s dust results in M31 (on which I am a co-author). He then showed us detailed velocity measurements he has made for 13,000 (!) stars in the M31 disk. He finds that the velocity dispersion of the disk grows with age, and grows faster and to larger values than in the Milky-Way disk. That led to more lunch-time speculation.

In the cosmology meeting, Shirley Ho (CMU) spoke about large-scale structure and machine learning. She asked the question: Can we use machine learning to compare simulations to data? In order to address this, she is doing a toy project: Compare simulations to simulations. She finds that a good conv-net does as well as the traditional power-spectrum analysis. This led to some productive discussion of where machine learning is most valuable in cosmology. Ben Wandelt (Paris) hypothesized that a machine-learning emulator can’t beat an n-body simulation. I disagreed (though on weak grounds)! We proposed that we set up a challenge of some kind, very well specified.

Ben Wandelt then spoke about linear inverse problems, on which he is doing very creative and promising work. He classified foreground approaches (for LSS and CMB) into Avoid or Adapt or Attack. On Avoid: He is using a low-rank covariance constraint to find foregrounds (this capitalizes on smooth dependences on wavelength or frequency, while reducing detailed assumptions). He showed that this separates signal from foreground: The signal is high-rank and CDM-like (isotropic, homogeneous, etc), while the foreground is low-rank (smooth in wavelength space). He then switched gears and showed us an amazingly high signal-to-noise void–galaxy cross-correlation function. We discussed how the selection affects the result. The cross-correlation is strongly negative at small separations and shows an obvious Alcock–Paczynski effect. David Spergel asked: Since this is an observation of “empty space”, does it somehow falsify modified GR or radical particle things?

2017-04-25

Dr Geoff Ryan

Today Geoff Ryan (NYU) defended his PhD. I wrote a few things about his work here last week and he did not disappoint in the defense. The key idea I take from his work is: In an axisymmetric system (axisymmetric matter distribution and axisymmetric force law), material will not accrete without viscosity; it will settle into an incredibly long-lived disk (like Saturn's rings!). This problem has been solved by adding viscosity (artificially, but we do expect effective sub-grid viscosity from turbulence and magnetic fields), but less has been done about non-axisymmetry. Ryan shows that in the case of a binary system (this generates the non-axisymmetry), accretion can be driven without any viscosity. That's important and deep. He also talked about numerics, and also about GRB afterglows. It was a great event and we will be sad to see him go.

2017-04-24

hypothesis testing and marginalization

I had a valuable chat in the morning with Adrian Price-Whelan (Princeton) about some hypothesis testing, for stellar pairs. The hypotheses are: unbound and unrelated field stars, co-moving but unbound, and co-moving because bound. We discussed this problem as a hypothesis test, and also as a parameter estimation (estimating binding energy and velocity difference). My position (that my loyal reader knows well) is that you should never do a hypothesis test when you can do a parameter estimation.

A Bayesian hypothesis test involves computing fully marginalized likelihoods (FMLs). A parameter estimation involves computing partially marginalized posteriors. When I present this difference to Dustin Lang (Toronto), he tends to say “how can marginalizing out all but one of your parameters be so much easier than marginalizing out all your parameters?”. Good question! I think the answer has to do with the difference between estimating densities (probability densities that integrate to unity) and estimating absolute probabilities (numbers that sum to unity). But I can't quite get the argument right.
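
Written out (my notation), the two objects are:

```latex
% hypothesis test: the fully marginalized likelihood, one absolute number
Z = \int p(D \mid \theta)\, p(\theta)\, \mathrm{d}\theta
% parameter estimation: a marginal posterior density, needed only up
% to its (one-dimensional) normalization
p(\theta_1 \mid D) \propto \int p(D \mid \theta)\, p(\theta)\,
    \mathrm{d}\theta_2 \cdots \mathrm{d}\theta_n
```

MCMC hands you the second object without ever computing its normalization; the first object is a normalization. Maybe that is the heart of the asymmetry.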

In my mind, this is connected to an observation I have seen over at Andrew Gelman's blog more than once: When predicting the outcome of a sporting event, it is much better to predict a pdf over final scores than to predict the win/loss probability. This is absolutely my experience (context: horse racing).

2017-04-21

the last year of a giant star's life

Eliot Quataert (Berkeley) gave the astrophysics seminar today. He spoke about the last years-to-days in the lifetime of a massive star. He is interested in explaining the empirical evidence that suggests that many of these stars cough out significant mass ejection events in the last years of their lives. He has mechanisms that involve convection in the core driving gravity (not gravitational) waves in the outer parts that break at the edge of the star. His talk touched on many fundamental ideas in astrophysics, including the conditions under which an object can exceed the Eddington luminosity. For mass loss driven (effectively) by excess luminosity, you have to both exceed (some form of) the Eddington limit and deposit energy high enough up in the star's radius that there is enough total energy (luminosity times time) to unbind the outskirts. His talk also (inadvertently) touched on some points of impedance matching that I am interested in. Quataert's research style is something I admire immensely: Very simple, very fundamental arguments, backed up by very good analytic and computational work. The talk was a pleasure!

After the talk, I went to lunch with Daniela Huppenkothen (NYU), Jack Ireland (GSFC), and Andrew Inglis (GSFC). We spoke more about possible extensions of things they are working on in more Bayesian or more machine-learning directions. We also talked about the astrophysics Decadal process, and the impacts this has on astrophysics missions at NASA and projects at NSF, and comparisons to similar structures in the Solar world. Interestingly rich subject there.

2017-04-20

Solar data

In the morning, Jack Ireland (GSFC) and Andrew Inglis (GSFC) gave talks about data-intensive projects in Solar Physics. Ireland spoke about his Helioviewer project, which is a rich, multi-modal, interactive interface to the multi-channel, heterogeneous, imaging, time-stream, and event data on the Sun, coming from many different missions and facilities. It is like Google Earth for the Sun, but also with very deep links into the raw data. This project has made it very easy for scientists (and citizen scientists) from all backgrounds to interact with and obtain Solar data.

Inglis spoke about his AFINO project to characterize all Solar flares in terms of various time-series (Fourier) properties. He is interested in the same kinds of questions for Solar flares that Huppenkothen (NYU) asks about neutron-star and black-hole transients. Some of the interaction during the talk was about different probabilistic approaches to power-spectrum questions in the time domain.

Over lunch I met with Ruth Angus (Columbia) to consult on her stellar chronometer projects. We discussed bringing in vertical action (yes, Galactic dynamics) as a stellar clock or age indicator. It is an odd indicator, because the vertical action (presumably) random-walks with time. This makes it a very low-precision clock! But it has many nice properties: It works for all classes of stars (possibly with subtleties); in our self-calibration context it connects age indicators of different types from different stars; and it is good at constraining old ages. We wrote some math and discussed further our MCMC sampling issues.

2017-04-19

after SDSS-IV; red-clump stars

At Stars group meeting, Juna Kollmeier (OCIW) spoke about the plans for the successor project to SDSS-IV. It will be an all-sky spectroscopic survey, with 15 million spectroscopic visits, on 5-ish million targets. The cadence and plan are made possible by advances in robot fiber positioning, and The Cannon, which permits inferences about stars that scale well with decreasing signal-to-noise ratio. The survey will use the 2.5-m SDSS telescope in the North, and the 2.5-m du Pont in the South. Science goals include galactic archaeology, stellar systems (binaries, triples, and so on), evolved stars, origins of the elements, TESS scientific support and follow-up, and time-domain events. The audience had many questions about operations and goals, including the maturity of the science plan. The short story is that partners who buy in to the survey now will have a lot of influence over the targeting and scientific program.

Keith Hawkins (Columbia) showed his red-clump-star models built on TGAS and 2MASS and WISE and GALEX data. He finds an intrinsic scatter of about 0.17 magnitude (RMS) in many bands, and, when the scatter is larger, there are color trends that could be calibrated out. He also, incidentally, infers a dust reddening for every star. One nice result is that he finds a huge dependence of the GALEX photometry on metallicity, which has lots of possible scientific applications. The crowd discussed the extent to which theoretical ideas support the standard-ness of RC stars.

2017-04-18

Dr Vakili

The research highlight of the day was a beautiful PhD defense by my student MJ Vakili (NYU). Vakili presented two big projects from his thesis: In one, he has developed fast mock-catalog software for understanding cosmic variance in large-scale structure surveys. In the other, he has built and run an inference method to learn the pixel-convolved point-spread function in a space-based imaging device. In both cases, he has good evidence that his methods are the best in the world. (We intend to write up the latter in the Summer.) Vakili's thesis is amazingly broad, going from pixel-level image processing work that will serve weak-lensing and other precise imaging tasks, all the way up to new methods for using computational simulations to perform principled inferences with cosmological data sets. He was granted a PhD at the end of an excellent defense and a lively set of arguments in the seminar room and in committee. Thank you, MJ, for a great body of work, and a great contribution to my scientific life.

2017-04-17

accretion onto binary black holes

I talked to Ana Bonaca (Harvard) and Lauren Anderson (Flatiron) about their projects in the morning. With Bonaca I discussed the computation of numerically stable derivatives with respect to parameters. This is not a trivial problem when the model (of which you are taking derivatives) is itself a simulation or computation. With Anderson we edited and prioritized the to-do list to finish writing the first draft of her paper.
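
One standard defense (a sketch; I do not claim this is what Bonaca's code does) is symmetric differences plus a scan over step sizes, looking for the plateau where truncation error and simulation noise are both subdominant:

```python
import numpy as np

def central_diff(sim, theta, i, h):
    # Symmetric finite difference of a scalar simulation output with
    # respect to parameter i; h too small amplifies simulation noise,
    # h too large incurs truncation error.
    e = np.zeros_like(theta)
    e[i] = h
    return (sim(theta + e) - sim(theta - e)) / (2.0 * h)

# usage, with a stand-in "simulation"; look for the plateau in h
sim = lambda th: np.sin(th[0]) * th[1]**2
theta = np.array([1.0, 2.0])
for h in [1e-1, 1e-3, 1e-5, 1e-7]:
    print(h, central_diff(sim, theta, 0, h))   # truth: cos(1)*4 = 2.161
```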

At lunch time, Geoff Ryan (NYU) gave the CCPP brown-bag talk, about accretion modes for binary black holes. Because the black holes orbit in a cavity in the circum-binary accretion disk, and then are fed by a stream (from the inner edge of the cavity), there is an unavoidable creation of shocks, either in transient activity or in steady state. He analyzed the steady-state solution, and finds that the shocks drive accretion. It is a beautiful model for accretion that does not depend in any way on any kind of artificial or sub-grid viscosity.

2017-04-14

writing

I worked on putting references into my similarity-of-objects document (how do you determine that two different objects are identical in their measurable properties?), and tweaking the words, with the hope that I will have something postable to the arXiv soon.

2017-04-13

crazy space hardware

I spent today at JPL, where Leonidas Moustakas (JPL) set up for me a great schedule with various of the astronomers. I met the famous John Trauger (JPL), who was the PI on WFPC2 and deserves some share of the credit for repairing the Hubble Space Telescope. I discussed coronagraphy with Trauger and various others. I learned about the need for coronagraphs to have two deformable mirrors (not just one) to be properly adaptive. With Dimitri Mawet (Caltech) I discussed what kind of data set we would like to have in order to learn, in a data-driven way, to predictively adapt the deformable mirrors in a coronagraph that is currently taking data.

With Eric Huff (JPL) I discussed the possibility of doing weak lensing without ever explicitly measuring any galaxies—that is, measuring shear in the pixels of the images of the field directly. I also discussed with him the (apparently insane but maybe not) idea of using the Sun itself as a gravitational lens, capable of imaging continents on a distant, rocky exoplanet. This requires getting a spacecraft out to some 550 AU, and then positioning it to km accuracy! Oh and then blocking out the light from the Sun.

Martin Elvis (CfA) gave a provocative talk today, about the future of NASA astrophysics in the context of commercial space, which might drive down prices on launch vehicles, and drive up the availability of heavy lift. A theme of his talk, and a theme of many of my conversations during the day, was just how long the time-scales are on NASA astrophysics missions, from proposal to launch. At some point missions might start to take longer than a career; that could be very bad (or at least very disruptive) for the field.

2017-04-12

ZTF; self-calibration; long-period planets

I spent today at Caltech, where I spoke about self-calibration. Prior to that I had many interesting conversations. From Anna Ho (Caltech) I learned that ZTF is going to image 15,000 square degrees per night. That is life-changing! I argued that they should position their fields to facilitate self-calibration, which might break some ideas they might have about image differencing.

With Nadia Blagorodnova (Caltech) I discussed calibration of the SED Machine, which is designed to do rapid low-resolution follow-up of ZTF and LSST events. They are using dome and twilight flats (something I said is a bad idea in my colloquium) and indeed they can see that those flats are deficient or inaccurate. We discussed how to take steps towards self-calibration.

With Heather Knutson (Caltech) I discussed long-period planets. She is following up (with radial velocity measurements) the discoveries that Foreman-Mackey and I (and others) made in the Kepler data. She doesn't clearly agree with our finding that there are something like 2 planets per star (!) at long periods, but of course her radial-velocity work has different sensitivity to planets. We discussed the possibility of using radial-velocity surveys to do planet populations work; she believes it is possible (something I have denied previously, on the grounds of unrecorded human decision-making in the observing strategies).

In my talk I made some fairly aggressive statements about Euclid's observing strategies and calibration. That got me some valuable feedback, including some hope that they will modify their strategies before launch. The things I want can be set or modified at the 13th hour!

2017-04-11

self-calibration

I worked more today on my slides on self-calibration for the 2017 Neugebauer Lecture at Caltech. I had an epiphany, which is that the color–magnitude diagram model I am building with Lauren Anderson (Flatiron) can be seen in the same light as self-calibration. The “instrument” we are calibrating is the physical regularities of stars! (This can be seen as an instrument built by God, if you want to get grandiose.) I also drew a graphical model for the self-calibration of the Sloan Digital Sky Survey imaging data that we did oh so many years ago. It would probably be possible to re-do it with full Bayes with contemporary technology!

2017-04-10

causal photometry

Last year, Dun Wang (NYU) and Dan Foreman-Mackey (UW) discovered, on a visit to Bernhard Schölkopf (MPI-IS), that independent components analysis can be used to separate spacecraft and stellar variability in Kepler imaging, and perform variable-source photometry in crowded-field imaging. I started to write that up today. ICA is a magic method, which can't be correct in detail, but which is amazingly powerful straight out of the box.
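
The out-of-the-box flavor of that magic, in a toy (sklearn's FastICA; the signals and the mixing are invented, and vastly simpler than the Kepler reality):

```python
import numpy as np
from sklearn.decomposition import FastICA

# Many pixel light curves, each an unknown mixture of one stellar
# signal and one spacecraft systematic; ICA recovers the two source
# time series without being told the mixing matrix.
rng = np.random.default_rng(11)
t = np.linspace(0.0, 10.0, 2000)
stellar = np.sin(3.0 * t) * (1.0 + 0.3 * np.sin(0.4 * t))
pointing = ((1.7 * t) % 1.0) - 0.5               # sawtooth-ish drift

S = np.column_stack([stellar, pointing])         # true sources
A = rng.uniform(0.5, 2.0, size=(2, 12))          # random mixing, 12 pixels
X = S @ A + 0.01 * rng.standard_normal((t.size, 12))

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(X)   # columns match sources up to sign/scale
```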

I also worked on my slides for the 2017 Neugebauer Memorial Lecture at Caltech, which is on Wednesday. I am giving a talk the likes of which I have never given before.

2017-04-07

searches for cosmological estimators

I spent my research time today working through pages of the nearly-complete PhD dissertation of MJ Vakili (NYU). The thesis contains results in large-scale structure and image processing, which are related through long-term goals in weak lensing. In some ways the most exciting part of the thesis for me right now is the part on HST WFC3 IR calibration, in part because it is new, and in part because I am going to show some of these results in Pasadena next week.

In the morning, Colin Hill (Columbia) gave a very nice talk on secondary anisotropies in the cosmic microwave background. He has found a new (and very simple) way to detect the kinetic S-Z effect statistically, and can use it to measure the baryon fraction in large-scale structure empirically. He has found a new statistic for measuring the thermal S-Z effect too, which provides more constraining power on cosmological parameters. In each case, his statistic or estimator is cleverly designed around physical intuition and symmetries. That led me to ask him whether even better statistics might be found by brute-force search, constrained by symmetries. He agreed, and has even done some thinking along these lines already.

2017-04-06

direct detection of the cosmic neutrino background

Today was an all-day meeting at the Flatiron Institute on neutrinos in cosmology and large-scale structure, organized by Francisco Villaescusa-Navarro (Flatiron). I wasn't able to be at the whole meeting, but two important things I learned in the part I saw are the following:

Chris Tully (Princeton) astonished me by showing his real, funded attempt to actually directly detect the thermal neutrinos from the Big Bang. That is audacious. He has a very simple design, based on capture of electron neutrinos by tritium that has been very loosely bound to a graphene substrate. Details of the experiment include absolutely enormous surface areas of graphene, and also very clever focusing (in a phase-space sense) of the liberated electrons. I'm not worthy!

Raul Jimenez (Barcelona) spoke about (among other things) a statistical argument for a normal (rather than inverted) hierarchy for neutrino masses. His argument depends on putting priors over neutrino masses and then computing a Bayes factor. This argument made the audience suspicious, and he got some heat during and after his talk. Some comments: One is that he is not just doing simple Bayes factors; he is learning a hierarchical model and assessing within that. That is a good idea. Another is that this is actually the ideal place to use Bayes factors: Both models (normal and inverted) have exactly the same parameters, with exactly the same prior. That obviates many of my usual objections (yes, my loyal reader may be sighing) to computing the integrals I call FML. I need to read and analyze his argument at some point soon.

One amusing note about the day: For technical reasons, Tully really needs the neutrino mass hierarchy to be inverted (not normal), while Jimenez is arguing that the smart money is on the normal (not inverted) hierarchy.

2017-04-05

a stellar stream with only two stars? And etc

In Stars group meeting, Stephen Feeney (Flatiron) walked us through his very complete hierarchical model of the distance ladder, including supernova Hubble Constant measurements. He can self-calibrate and propagate all of the errors. The model is seriously complicated, but no more complicated than it needs to be to capture the covariances and systematics that we worry about. He doesn't resolve (yet) the tension between distance ladder and CMB (especially Planck).

Semyeong Oh (Princeton) and Adrian Price-Whelan (Princeton) reported on some of their follow-up spectroscopy of co-moving pairs of widely separated stars. They have a pair that is co-moving, moving at escape velocity in the halo, and separated by 5-ish pc! This could be a cold stellar stream detected with just two stars! How many of those will we find! Yet more evidence that Gaia changes the world.

Josh Winn (Princeton) dropped by and showed us a project that, by finding very precise stellar radii, gets more precise planet radii. That, in turn, shows that the super-Earths really split into two populations, super-Earths and mini-Neptunes, with a deficit between. Meaning: There are non-trivial features in the planet radius distribution. He showed some attempts to demonstrate that this is real, reminding me of the whole accuracy vs precision thing, once again.

In Cosmology group meeting, Dick Bond (CITA) corrected our use of “intensity mapping” to “line intensity mapping” and then talked about things that might be possible as we observe more and more lines in the same volume. There is a lot to say here, but some projects are going small and deep, and others are going wide and shallow; we learn complementary things from these approaches. One question is: How accurate do we need to be in our modeling of neutral and molecular gas, and the radiation fields that affect them, in order for us to do cosmology with these observables? I am hoping we can simultaneously learn things about the baryons, radiation, and large-scale structure.

2017-04-04

words on a plane

On the plane home, I worked on my similarity-of-vectors (or stellar twins) document. I got it to the first-draft stage.

2017-04-03

how to add and how to subtract

My only research today was conversations about various matters of physics, astrophysics, and statistics with Dan Maoz (TAU), as we hiked near the Red Sea. He recommended these three papers on how to add and how to subtract astronomical images. I haven't read them yet, but as my loyal reader knows, the word “optimal” is a red flag for me, as in I'm-a-bull-in-a-bull-ring type of red flag. (Spoiler alert: The bull always loses.)

On the drive home Maoz expressed the extremely strong opinion that dumping a small heat load Q inside a building during the hot summer does not lead to any additional load on that building's air-conditioning system. I spent part of my late evening thinking about whether there are any conceivable assumptions under which this position might be correct. Here's one: The building is so leaky (of air) that the entire interior contents of the building are replaced before the A/C has cooled it by a significant amount. That would work, but it would also be a limit in which A/C doesn't do anything at all, really; that is, in this limit, the interior of the building is the same temperature as the exterior. So I think I concluded that if you have a well-cooled building, if you add heat Q internally, the A/C must do marginal additional work to remove it. One important assumption I am making is the following (and maybe this is why Maoz disagreed): The A/C system is thermostatic and hits its thermostatic limits from time to time. (And that is inconsistent with the ultra-leaky-building idea, above.)
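
For the record, here is the steady-state energy balance behind my conclusion (my notation, and my framing rather than Maoz's): with the thermostat holding the interior temperature fixed,

    \dot{Q}_{\rm AC} = \dot{Q}_{\rm envelope}(T_{\rm in}, T_{\rm out}) + \dot{Q}_{\rm internal}
    \quad , \quad
    W = \dot{Q}_{\rm AC} / \mathrm{COP}

The envelope term depends only on the (fixed) interior and exterior temperatures, so the marginal work is dW/dQ_internal = 1/COP > 0; every watt dumped inside costs the A/C something.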

2017-04-02

John Bahcall (and etc)

I spent today at Tel Aviv University, where I gave the John Bahcall Astrophysics Lecture. I spoke about exoplanet detection and population inferences. I spent quite a bit of the day with Dovi Poznanski (TAU) and Dani Maoz (TAU). Poznanski and I discussed extensions and alternatives to his projects to use machine learning to find outliers in large astrophysical data sets. This continued conversations with him and Dalya Baron (TAU) from the previous evening.

Maoz and I discussed his conversions of cosmic star-formation history into metal enrichment histories. These involve the SNIa delay times, and they provide new interpretations of the alpha-to-Fe vs Fe-to-H ratio diagrams. The alpha-to-Fe ratio doesn't drop when the SNIa kick in (that's the standard story, but it's wrong); it drops when the SNIa contribution to the metal-production rate exceeds the core-collapse contribution. If the star-formation history is continuous, this can be long after the appearance of the first SNe Ia. Deep stuff.

The day gave me some time to reflect on my time with John Bahcall at the IAS. I have too much to say here, but I found myself in the evening reflecting on his remarkable and prescient scientific intuition. He was one of the few astronomers who understood, immediately on the early failure of HST, that it made more sense to try to repair it than try to replace it. This was a great realization, and transformed both astrophysics and NASA. He was also one of the few physicists who strongly believed that the Solar neutrino problem would lead to a discovery of new physics. Most particle physicists thought that the Solar model couldn't be that robust, and most astronomers didn't think about neutrinos. Boy was John right!

(I also snuck in a few minutes on my stellar twins document, which I gave to Poznanski for comments.)

2017-03-31

the future of astrophysical data analysis

Dan Foreman-Mackey (UW) crashed NYC today, surprising me, and disrupting my schedule. We began our day by arguing about the future of hierarchical modeling. His position is (sort-of) that the future is not hierarchical Bayes as it is currently done, but rather that we will be doing things that are much more ABC-like. That is, astrophysics theory is (generally) computational or simulation-based, and the data space is far too large for us to understand densities or probabilities in the data space. So we need ways to responsibly use simulations in inference. Right now the leading method is what is called (dumbly) ABC. I asked: So, are we going to do CMB component separation at the pixel level with ABC? This seems impossible at present, and DFM pointed out that ABC is best when precision requirements are low. When precision requirements are high, there aren't really options that have computer simulations inside the inference loop!
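
For the uninitiated, the simplest version of the idea is rejection ABC, which needs only a simulator and a distance between summary statistics. A toy sketch (illustrative only; real problems need far better summaries and tolerances):

    # Toy rejection ABC: infer a location parameter mu using only a
    # simulator, never an explicit likelihood evaluation.
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(3.0, 1.0, size=100)      # "observed" data
    obs_summary = np.mean(data)                # summary statistic

    def simulate(mu, size=100):
        # Stand-in for an expensive forward simulation.
        return rng.normal(mu, 1.0, size=size)

    samples = []
    while len(samples) < 500:
        mu = rng.uniform(-10.0, 10.0)          # draw from the prior
        if abs(np.mean(simulate(mu)) - obs_summary) < 0.1:  # tolerance
            samples.append(mu)
    # `samples` approximates the posterior for mu. Shrinking the
    # tolerance improves fidelity but multiplies the simulation cost,
    # which is why ABC suits low-precision problems.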

Many other things happened today. I spent time with Lauren Anderson (Flatiron), validating and inspecting the output of our parallax inferences. I spent a phone call with Fed Bianco (NYU) talking about how to adapt Gaussian Processes to make models of supernovae light curves. And Foreman-Mackey and I spent time talking about linear algebra, and also this blog post, with which we more-or-less agree (though perhaps it doesn't quite capture all the elements that contribute (positively and negatively) to the LTFDFCF of astronomers!).

2017-03-30

linear algebra; huge models

I had a long conversation today with Justin Alsing (Flatiron) about hierarchical Bayesian inference, which he is thinking about (and doing) in various cosmological contexts. He is thinking about inferring a density field that simultaneously models the galaxy structures and the weak lensing, to do a next-generation (and statistically sound) lensing tomography. His projects are amazingly sophisticated, and he is not afraid of big models. We also talked about using machine learning to do emulation of expensive simulations, initial-conditions reconstruction in cosmology, and moving-object detection in imaging.

I also spent time playing with my linear algebra expressions for my document on finding identical stars. Some of the matrices in play are low-rank; so I ought to be able to either simplify my expressions or else reduce the number of computational steps. Learning about my limitations, mathematically! One thing I re-discovered today is how useful it is to use the Kusse & Westwig notation and conceptual framework for thinking about hermitian matrices and linear algebra.
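
Naming the standard tool plainly (since I suspect it is the answer here): the Woodbury identity makes inverses of diagonal-plus-low-rank matrices cheap,

    (C + U V^T)^{-1} = C^{-1} - C^{-1} U \, (I + V^T C^{-1} U)^{-1} \, V^T C^{-1}

with C diagonal (N by N) and U, V of shape N by K; the cost of the inverse drops from O(N^3) to O(N K^2) when K is much less than N.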

2017-03-29

ergodic stars; blinding and pre-registration

In the Stars group meeting, Nathan Leigh (AMNH) and Nick Stone (Columbia) spoke about 4-body scattering or 2-on-2 binary-binary interactions. These can lead to 3-1, 2-2, and 2-1-1 outcomes, with the latter being most common. They are using a fascinating and beautiful ergodic-hypothesis-backed method (constrained by conservation laws) to solve for the statistical input–output relations quasi-analytically. This is a beautiful idea and makes predictions about star-system evolution in the Galaxy.

In the Cosmology group meeting, Alex Malz (NYU) led a long and wide-ranging discussion of blinding (making statistical results more reliable by pre-registering code or sequestering data). The range of views in the room was large, but all agreed that you need to be able to do exploratory data analysis and also protect against investigator bias. My position is we better be doing some form of blinding for our most important questions, but I also think that we need to construct these methods to permit people to play with the data and permit public data releases that are uncensored and unmodified. One theme which came up is that astronomy's great openness is a huge asset here. Fundamentally we are protected (in part) by the availability of the data to re-analysis.

2017-03-28

Local Group on the Local Group

Today, Kathryn Johnston (Columbia) organized a “Local Group on the Local Group” meeting at Columbia. Here are some highlights:

Lauren Anderson (Flatiron) gave an update on her data-driven model of the color–magnitude diagram of stars. This led to a conversation: Which features in her deconvolved CMD are real? And are there too many red-clump stars given the total catalog size?

Steven Mohammed (Columbia) showed our GALEX Galactic-Plane survey data on the Gaia TGAS stars. The GALEX colors look very sensitive to metallicity and possibly other abundances. The audience suggested that we look at the full dependences on metallicity and temperature and surface gravity to see if we can break all degeneracies. This led to more discussion of the use of the Red Clump stars for Galactic science.

Adrian Price-Whelan (Columbia) presented a puzzle about the Galactic globular cluster system, which he has been thinking about. Are the distant clusters accreted? The in-situ formation hypothesis is unpalatable (it had to be many clusters at early times; should be many thin streams); the accreted hypothesis over-produces the smooth component of the stellar halo (unless dwarf galaxies had far more GCs per unit stellar mass in the past). These problems can be resolved, but only with strong predictions.

Yong Zheng (Columbia) spoke about the gaseous Magellanic stream and associated (or plausibly associated) high-velocity clouds. Many of the challenges in interpretation connect to the problem that we don't know where the gas is along the line of sight. She showed really nice data on something called Wright’s Cloud. For this huge structure—and for the stream as a whole—there is little to no associated stellar component.

Nicola Amorisco (Harvard) showed theoretical simulations of the accreted part of the MW (and MW-like-galaxy) halo, with the goal of finding stellar-halo observables that strongly co-vary with the assembly history of the dark-matter halo. Both theory and observations suggest large scatter in halo properties at Milky-Way-like masses, and much less scatter at higher masses (because of central-limit-like considerations). His results are promising for understanding the MW assembly history.

Glennys Farrar (NYU) spoke about the MW magnetic field, using rotation measures and CMB to constrain the model. She showed UHECR deflections in the inferred magnetic field, and also discussed implications of her results for electron and cosmic-ray diffusion. There are also tantalizing implications for the synchrotron spectrum and CMB component separation. One interesting comment: If her results are right for the scale and amplitude of the field, there are serious questions about origin and generation; is it primordial or generated on scales much larger than the galaxy?

2017-03-27

identical, statistically speaking

My research activity today was to re-write, from scratch (well, I really started yesterday) my document on how you tell whether two noisily measured objects are identical. This is an old and solved problem! But I am writing the answer in astronomer-friendly form, with a few astronomy-related twists. I have no idea whether this is a paper, a section of a paper, or something else. My re-write was caused by the algebra I learned from Leistedt, and the customers (so to speak) of the document are Rix, Ness, and Hawkins, all of whom are thinking about finding identical pairs of stars.

2017-03-24

how to write an April Fools' paper

I had a great visit to the University of Toronto Department of Astronomy and Astrophysics (and Dunlap Institute) today. I had great conversations about scintillometry (new word?) and the future of likelihood functions and component separation in the CMB. I also discussed pairwise velocity differences in cosmology, and probabilistic supernova classification. There is lots going on. I gave my talk on The Cannon, in which I was perhaps way too pessimistic about chemical tagging!

Early in the day, I ate Toronto-style (no, not Montreal-style) bagels with Dustin Lang (Toronto) and discussed many of the things we like to discuss, like finding very faint outer-Solar-System objects in all the data Lang wrangles, like the differences between accuracy and precision, and even how to define accuracy in astrophysics, and like April Fools' papers, which have to meet four criteria:

  1. conceptually interesting inference
  2. extremely challenging computation
  3. no long-term scientific value to the specific results found
  4. non-irrelevant mention of April 1 in abstract
It is a brutal set of requirements but we have met them twice. I think this year is out (because of criterion 2), but maybe 2018?

2017-03-23

math with Gaussians

My one piece of research news today was an email exchange with Boris Leistedt (NYU) in which he completely took me to school on math with Gaussians. My intuition (expressed this week) that there was an easier way to do all the operations I was doing was right! But everything else I was doing was not wrong but wrong-headed. Anyways, this should simplify some things right away. The key observation is that a product of Gaussians can be transformed into another product of Gaussians, in another basis, trivially. More soon!
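
The identity in question is standard textbook material (not Leistedt's private magic): a product of two Gaussians in x factors into a Gaussian in x times a Gaussian comparing the two means,

    N(x; a, A) \, N(x; b, B) = N(a; b, A + B) \, N(x; c, C)
    \quad {\rm where} \quad
    C = (A^{-1} + B^{-1})^{-1} , \quad c = C \, (A^{-1} a + B^{-1} b)

All the x dependence lives in the second factor (the "another basis" part), and the first factor is exactly the marginal comparison of the two means.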

2017-03-22

circular reasoning, continuity of globular clusters

In Stars group meeting, Lauren Anderson (Flatiron) showed our toy example that demonstrates why our method for de-noising the Gaia TGAS data works. That led to some useful conversation that might help us explain our project better. I didn't take all the notes I should have! One idea that came up is that if there are two populations, one only seen at very low signal-to-noise, then that second population can easily get pulled into the first. Another is the question of the circularity of the reasoning. Technically, our reasoning is circular, but it wouldn't be if we marginalized out the hyper-parameters (that is, the parameters of our color–magnitude diagram).

Also in the Stars meeting, Ruth Angus (Columbia) suggested how we might responsibly look for the differences in exoplanet populations with stellar age. And Semyeong Oh (Princeton) and Adrian Price-Whelan (Princeton) described their very successful observing run to follow up the comoving stellar pairs. Preliminary analyses suggest that many of the pairs (which we found only with transverse information) are truly comoving.

In Cosmology group meeting, Jeremy Tinker discussed the possibility of using halo-occupation-like approaches to determine how the globular cluster populations of galaxies form and evolve. This led to a complicated and long discussion, with many ideas and issues arising. I do think that various simple scenarios could be ruled out, making use of some kind of continuity argument (with sources and sinks, of course).

I spent some time hidden away working on multiplying and integrating Gaussians. I am doing lots of algebra, completing squares. I have the tiniest suspicion that there is an easier way, or that all of the math I am doing has a simple answer at the end, one that I could have seen before starting.

2017-03-21

half-pixel issues; building our own Gibbs sampler

First thing in the morning I met with Steven Mohammed (Columbia) and Dun Wang (NYU) to discuss GALEX calibration and imaging projects. Wang has a very clever astrometric calibration of the satellite, built by cross-correlating photons with the positions of known stars. This astrometric calibration depends on properties of the photons for complicated reasons that relate to the detector technology on board the spacecraft. Mohammed finds, in an end-to-end test of Wang's images, that there might be half-pixel issues in our calibration. We came up with methods for tracking that down.

Late in the day, I met with Ruth Angus (Columbia) to discuss the engineering in her project to combine all age information (and self-calibrate all methods). We discussed how to make a baby test where we can do the sampling with technology we are good at, before we write a brand-new Gibbs sampler from scratch. Why, you might ask, would any normal person write a Gibbs sampler from scratch when there are so many good packages out there? Because you always learn a lot by doing it! If our home-built Gibbs doesn't work well, we will adopt a package.
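
As a sense of scale for the baby test (my sketch of a warm-up problem, not Angus's actual model), a from-scratch Gibbs sampler for a correlated bivariate Gaussian fits in a dozen lines:

    # Hand-rolled Gibbs sampler for a correlated bivariate Gaussian:
    # alternately draw x | y and y | x from their exact conditionals.
    import numpy as np

    rng = np.random.default_rng(1)
    rho = 0.8                       # correlation of the target Gaussian
    n_steps = 10000
    chain = np.empty((n_steps, 2))
    x, y = 0.0, 0.0
    for i in range(n_steps):
        # p(x | y) = N(rho * y, 1 - rho**2), and symmetrically for y.
        x = rng.normal(rho * y, np.sqrt(1 - rho**2))
        y = rng.normal(rho * x, np.sqrt(1 - rho**2))
        chain[i] = x, y
    # Check: the sample correlation should be close to rho.
    print(np.corrcoef(chain[2000:].T)[0, 1])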

2017-03-20

statistics questions

I spent time today writing in the method section of the Anderson et al paper. I realized in writing it that we have been thinking about our model of the color–magnitude diagram as being a prior on the distance or parallax. But it isn't really, it is a prior on the color and magnitude, which for a given noisy, observed star, becomes a prior on the parallax. We will compute these implicit priors explicitly (it is a different prior for every star) for our paper output. We have to describe this all patiently and well!

At some point during the day, Jo Bovy (Toronto) asked a very simple question about statistics: Why does re-sampling the data (given presumed-known Gaussian noise variances in the data space) and re-fitting deliver samples of the fit parameters that span the same uncertainty distribution as the likelihood function would imply? This is only true for linear fitting, of course, but why is it true (and no, I don't mean what is the mathematical formula!)? My view is that this is (sort-of) a coincidence rather than a result, especially since it (to my mind) confuses the likelihood and the posterior. But it is an oddly deep question.
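
Bovy's observation is easy to verify numerically. A sketch for a straight-line fit with known homoscedastic noise, where the likelihood-implied covariance is sigma^2 (X^T X)^{-1}:

    # Demonstration that resample-and-refit reproduces the likelihood
    # width for linear least squares with known Gaussian noise.
    import numpy as np

    rng = np.random.default_rng(2)
    n, sigma = 50, 0.3
    x = np.sort(rng.uniform(0, 10, n))
    X = np.vander(x, 2)                      # design matrix for a line
    beta_true = np.array([0.5, 1.0])
    y = X @ beta_true + sigma * rng.standard_normal(n)

    # Analytic (likelihood) covariance of the best-fit parameters:
    cov_analytic = sigma**2 * np.linalg.inv(X.T @ X)

    # Parametric bootstrap: resample with the known noise, refit.
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    boots = []
    for _ in range(5000):
        y_star = X @ beta_hat + sigma * rng.standard_normal(n)
        boots.append(np.linalg.lstsq(X, y_star, rcond=None)[0])
    cov_boot = np.cov(np.array(boots).T)
    print(cov_analytic)
    print(cov_boot)   # agrees to Monte Carlo noise, because the model is linear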

2017-03-16

a prior on the CMD isn't a prior on distance, exactly

Today my research time was spent writing in the paper by Lauren Anderson (Flatiron) about the TGAS color–magnitude diagram. I think of it as being a probabilistic inference in which we put a prior on stellar distances and then infer the distance. But that isn't correct! It is an inference in which we put a prior on the color–magnitude diagram, and then, given noisy color and (apparent) magnitude information, this turns into an (effective, implicit) prior on distance. This Duh! moment led to some changes to the method section!
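
Schematically (my notation, not necessarily the paper's): with a CMD prior p(c, M) and the usual relation between apparent magnitude, absolute magnitude, and parallax, the effective prior on the parallax of a star with noisily observed color and magnitude is

    m = M - 5\log_{10}\varpi - 5 \quad (\varpi~{\rm in~arcsec})

    p_{\rm eff}(\varpi \mid \hat{c}, \hat{m}) \propto
    \int {\rm d}c \, {\rm d}M \; p(c, M) \;
    N(\hat{c};\, c, \sigma_c^2) \;
    N(\hat{m};\, M - 5\log_{10}\varpi - 5,\; \sigma_m^2)

which then gets multiplied by the Gaia parallax likelihood. It is a different effective prior for every star, because the star's own noisy observables enter.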

2017-03-15

what's in an astronomical catalog?

The stars group meeting today wandered into dangerous territory, because it got me on my soap box! The points of discussion were: Are there biases in the Gaia TGAS parallaxes? and How could we use proper motions responsibly to constrain stellar parallaxes? Keith Hawkins (Columbia) is working a bit on the former, and I am thinking of writing something short with Boris Leistedt (NYU) on the latter.

The reason it got me on my soap-box is a huge set of issues about whether catalogs should deliver likelihood or posterior information. My view—and (I think) the view of the Gaia DPAC—is that the TGAS measurements and uncertainties are parameters of a parameterized model of the likelihood function. They are not parameters of a posterior, nor the output of any Bayesian inference. If they were outputs of a Bayesian inference, they could not be used in hierarchical models or other kinds of subsequent inferences without a factoring out of the Gaia-team prior.

This view (and this issue) has implications for what we are doing with our (Liestedt, Hawkins, Anderson) models of the color–magnitude diagram. If we output posterior information, we have to also output prior information for our stuff to be used by normals, down-stream. Even with such output, the results are hard to use correctly. We have various papers, but they are hard to read!
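
Concretely, the "also output prior information" requirement is the standard interim-prior reweighting: a downstream hierarchical inference for population parameters theta, given K posterior samples varpi_nk per star made under an interim prior p_0, looks (schematically) like

    p(\theta \mid {\rm data}) \propto p(\theta) \prod_n
    \frac{1}{K} \sum_{k=1}^{K}
    \frac{p(\varpi_{nk} \mid \theta)}{p_0(\varpi_{nk})}

so the user needs p_0 evaluated at every sample; without it, the catalog-maker's prior is silently baked in to everything downstream.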

One comment is that, if the Gaia TGAS contains likelihood information, then the right way to consider its possible biases or systematic errors is to build a better model of the likelihood function, given their outputs. That is, the systematics should be created to be adjustments to the likelihood function, not posterior outputs, if at all possible.

Another comment is that negative parallaxes make sense for a likelihood function, but not (really) for a posterior pdf. Usually a sensible prior will rule out negative parallaxes! But a sensible likelihood function will permit them. The fact that the Gaia catalogs will have negative parallaxes is related to the fact that it is better to give likelihood information. This all has huge implications for people (like me, like Portillo at Harvard, like Lang at Toronto) who are thinking about making probabilistic catalogs. It's a big, subtle, and complex deal.

2017-03-14

snow day

[Today was a NYC snow day, with schools and NYU closed, and Flatiron on a short day.] I made use of my incarceration at home writing in the nascent paper about the TGAS color–magnitude diagram with Lauren Anderson (Flatiron). And doing lots of other non-research things.

2017-03-13

toy problem

Lauren Anderson (Flatiron) and I met early to discuss a toy model that would elucidate our color–magnitude diagram model project. Context is: We want to write a section called “Why the heck does this work?” in our paper. We came up with a model so simple, I was able to implement it during the drinking of one coffee. It is, of course, a straight-line fit (with intrinsic width, then used to de-noise the data we started with).

planning a paper sprint, completing a square

Lauren Anderson (Flatiron) and I are going to sprint this week on her paper on the noise-deconvolved color–magnitude diagram from the overlap of Gaia TGAS, 2MASS, and the PanSTARRS 3-d dust map. We started the day by making a long to-do list for the week, that could end in submission of the paper. My first job is to write down the data model for the data release we will do with the paper.

At lunch time I got distracted by my project to find a better metric than chi-squared to determine whether two noisily-observed objects (think: stellar spectra or detailed stellar abundance vectors) are identical or indistinguishable, statistically. The math involved completing a huge square (in linear-algebra space) twice. Yes, twice. And then the result is—in a common limit—exactly chi-squared! So my intuition is justified, and I know where it will under-perform.

2017-03-10

the Milky Way halo

At the NYU Astro Seminar, Ana Bonaca (Harvard) gave a great talk, about trying to understand the dynamics and origin of the Milky Way halo. She has a plausible argument that the higher-metallicity halo stars are the halo stars that formed in situ and migrated out, while the lower-metallicity stars were accreted. If this holds up, I think it will probably test a lot of things about the Galaxy's formation, history, and dark-matter distribution. She also talked about stream fitting to see the dark-matter component.

On that note, we started a repo for a paper on the information theory of cold stellar streams. We re-scoped the paper around information rather than the LMC and other peculiarities of the Local Group. Very late in the day I drafted a title and abstract. This is how I start most projects: I need to be able to write a title and abstract to know that we have sufficient scope for a paper.

2017-03-09

The Cannon and APOGEE

I had further discussions of the Cramér-Rao-bound (or Fisher-matrix) computations on cold stellar streams being performed by Ana Bonaca (Harvard). We discussed how things change as we increase the number of parameters, and designed some possible figures for a possible paper.

I had a long phone call with Andy Casey (Monash) about The Cannon, which is being run inside APOGEE2 to deliver parameters in a supplemental table in data release 14. We discussed issues of flagging stars that are far from the training set. This might get strange in high dimensions.

In further APOGEE2 and The Cannon news, I dropped an email on the mailing lists about the radial-velocity measurements that Jason Cao (NYU) has been making for me and Adrian Price-Whelan (Princeton). His RV values look much better than the pipeline defaults, which is perhaps not surprising: The pipeline uses some cross-correlation templates, while Cao uses a very high-quality synthetic spectrum from The Cannon. This email led to some useful discussion about other work that has been done along these lines within the survey.

2017-03-08

does the Milky Way disk have spiral structure?

At stars group meeting, David Spergel (Flatiron) was tasked with convincing us (and Price-Whelan and I are skeptics!) that the Milky Way really does have spiral arms. His best evidence came from infrared emission in the Galactic disk plane, but he brought together a lot of relevant evidence, and I am closer to being convinced than ever before. As my loyal reader knows, I think we ought to be able to see the arms in any (good) 3-d dust map. So, what gives? That got Boris Leistedt (NYU), Keith Hawkins (Columbia), and me thinking about whether we can do this now, with things we have in-hand.

Also at group meeting, Semyeong Oh (Princeton) showed a large group-of-groups she has found by linking together co-moving pairs into connected components by friends-of-friends. It is rotating with the disk but at a strange angle. Is it an accreted satellite? That explanation is unlikely, but if it turns out to be true, OMG. She is off to get spectroscopy next week, though John Brewer (Yale) pointed out that he might have some of the stars already in his survey.

2017-03-07

finding the dark matter with streams

Today was a cold-stream science day. Ana Bonaca (Harvard) computed derivatives today of stream properties with respect to a few gravitational-potential parameters, holding the present-day position and orientation of the stream fixed. This permits computation of the Cramér-Rao bound on any inference or estimate of those parameters. We sketched out some ideas about what a paper along these lines would look like. We can identify the most valuable streams, the streams most sensitive to particular potential parameters, the best combinations of streams to fit simultaneously, and the best new measurements to make of existing streams.
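
In equations (schematic, my notation): if the observed stream data have Gaussian noise covariance C and model mean mu(theta), the derivatives Bonaca is computing assemble into the Fisher matrix, whose inverse bounds any unbiased estimate of the potential parameters,

    F_{ij} = \left(\frac{\partial \mu}{\partial \theta_i}\right)^{\!T} C^{-1}
             \left(\frac{\partial \mu}{\partial \theta_j}\right)
    \quad , \quad
    {\rm Var}(\hat{\theta}_i) \ge (F^{-1})_{ii}

Everything on the menu above (most valuable streams, best combinations, best new measurements) is a question about how these derivatives change F.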

Separately from this, I had a phone conversation with Adrian Price-Whelan (Princeton) about the point of doing stream-fitting. It is clear (from Bonaca's work) that fitting streams in toy potentials is giving us way-under-estimated error bars. This means that we have to add a lot more potential flexibility to get more accurate results. We debated the value of things like basis-function expansions, given that these are still in the regime of toy (but highly parameterized toy) models. We are currently agnostic about whether stream fitting is really going to reveal the detailed properties of the Milky Way's dark-matter halo. That is, for example, the properties that might lead to changes in what we think is the dark-matter particle.

2017-03-06

LMC effect on streams; dust corrections

Ana Bonaca (Harvard) showed up for a week of (cold) stellar streams inference. Our job is either to resurrect her project to fit multiple streams simultaneously, or else choose a smaller project to hack on quickly. One thing we have been discussing by email is the influence of the LMC (and SMC and M31 and so on) on the streams. Will it be degenerate with halo quadrupole or other parameters? We discussed how we might answer this question without doing full probabilistic inferences: In principle we only need to take some derivatives. This is possible, because Bonaca's generative stream model is fast. We discussed the scope of a minimum-scope paper that looks at these things, and Bonaca started computing derivatives.

Lauren Anderson (Flatiron) and I looked at her dust estimates for the stars in Gaia DR1 TGAS. She is building a model of the color–magnitude diagram with an iterative dust optimization: At zeroth iteration, the distances are (generally) over-estimated; we dust-correct, fit the CMD, and re-estimate distances. Then we re-estimate dust corrections, and do it again. The dust corrections oscillate between over- and under-corrections as the distances oscillate between over- and under-estimates. But it does seem to converge!

2017-03-03

similarities of stars; getting started in data science

I met with Keith Hawkins (Columbia) in the morning, to discuss how to find stellar pairs in spectroscopy. I fundamentally advocated chi-squared difference, but with some modifications, like masking things we don't care about, removing trends on length-scales (think: continuum) that we don't care about, and so on. I noted that there are things to do that are somewhat better than chi-squared difference, that relate to either hypothesis testing or else parameter estimation. I promised him a note about this, and I also owe the same to Melissa Ness (MPIA), who has similar issues but in chemical-abundance (rather than purely spectral) space. Late in the day I worked on this problem over a beer. I think there is a very nice solution, but it involves (as so many things like this do) a non-trivial completion of a square.
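
My guess at where the square-completion lands (a sketch; it omits the masking and continuum-removal parts): if the two stars have noisy spectra x_1, x_2 with noise covariances C_1, C_2, then under the "identical" hypothesis the difference is mean-zero with covariance C_1 + C_2, and the natural test statistic is

    \chi^2 = (x_1 - x_2)^T \, (C_1 + C_2)^{-1} \, (x_1 - x_2)

which reduces to the ordinary chi-squared difference for equal, diagonal covariances, but correctly down-weights directions in which either measurement is noisy.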

In the afternoon, I met with my undergrad-and-masters research group. Everyone is learning how to install software, and how to plot spectra, light curves, and rectangular data. We talked about projects with the Boyajian Star, and also with exoplanets in 1:1 resonances (!).

2017-03-02

D. E. Shaw

The research highlight of my day was a trip to D. E. Shaw, to give an academic seminar (of all things) on extra-solar planet research. I was told that the audience would be very mathematically able and familiar with physics and engineering, and it was! I talked about the stationary and non-stationary Gaussian Processes we use to model stellar (stationary) and spacecraft (non-stationary) variability, how we detect exoplanet signals by brute-force search, and how we build and evaluate hierarchical models to learn the full population of extra-solar planets, given noisy observations. The audience was interactive and the questions were on-point. Of course many of the things we do in astrophysics are not that different—from a data-analysis perspective—from things the hedge funds do in finance. I spent my time at D. E. Shaw trying to understand the atmosphere of the firm. It seems very academic and research-based, and (unlike at many banks), the quantitative researchers run the show.

2017-03-01

fitting stellar spectra and deblending galaxy images

Today was group meetings day. In the Stars meeting, John Brewer (Yale) told us about fitting stellar spectra with temperature, gravity, and composition, epoch-by-epoch for a multi-epoch radial-velocity survey. He is trying to understand how consistent his fitting is, what degeneracies there are, and whether there are any changes in temperature or gravity that co-vary with radial-velocity jitter. No results yet, but we had suggestions for tests to do. His presentation reinforced my idea (with Megan Bedell) to beat spectral variations against asteroseismological oscillation phase.

In the Cosmology meeting, Peter Melchior (Princeton) told us about attempts to turn de-blending into a faster and better method that is appropriate for HSC and LSST-generation surveys. He blew us away with a tiny piece of deep HSC imaging, and then described a method for deblending that looks like non-negative matrix factorization, plus convex regularizations. He has done his research on the mathematics around convex regularizations, reminding me that we should do a more general workshop on these techniques. We discussed many things in the context of Melchior's project; one interesting point is that the deblending problem doesn't necessarily require good models of galaxies (Dustin Lang and I always think of it as a modeling problem); it just needs to deliver a good set of weights for dividing up photons.
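
To make the "weights for dividing up photons" idea concrete, here is a toy factorization (mine, and far simpler than Melchior's method, with none of the convex regularizations): a multi-band image of two overlapping sources factors into nonnegative SEDs times nonnegative morphologies.

    # Toy deblend: (band x pixel) image of two overlapping sources,
    # factored as SEDs (band x source) times morphologies (source x pixel).
    import numpy as np
    from sklearn.decomposition import NMF

    x = np.arange(100)

    def blob(mu, sig):
        return np.exp(-0.5 * ((x - mu) / sig) ** 2)

    morph = np.vstack([blob(40, 6), blob(55, 6)])  # overlapping sources
    sed = np.array([[1.0, 0.2],                    # source 1 is blue,
                    [0.5, 0.5],                    # source 2 is red
                    [0.1, 1.0]])
    image = sed @ morph + 0.01 * np.random.default_rng(3).random((3, 100))

    model = NMF(n_components=2, init="nndsvda", max_iter=1000)
    A = model.fit_transform(image)   # recovered SEDs
    S = model.components_            # recovered morphologies
    flux = A[:, :, None] * S[None, :, :]               # band x source x pixel
    weights = flux / (flux.sum(axis=1, keepdims=True) + 1e-12)
    # `weights` divides each pixel's photons between the two sources --
    # no galaxy model required, just non-negativity and color differences.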

2017-02-28

#DtU17, day two

Today I dropped in on Detecting the Unexpected in Baltimore, to provide a last-minute talk replacement. In the question period of my talk, Tom Loredo (Cornell) got us talking about precision vs accuracy. My position is a hard one: We never have ground truth about things like chemical abundances of stars; every chemical abundance is a latent variable; there is no external information we can use to determine whether our abundance measurements are really accurate. My view is that a model is accurate only inasmuch as it makes correct predictions about qualitatively different data. So we are left with only precision for many of our questions of greatest interest. More on this in some longer form, later.

Highlights (for me; very subjective) of the day's talks were stories about citizen science. Chris Lintott (Oxford) told us about tremendous lessons learned from years of Zooniverse, and the non-trivial connections between how you structure a project and how engaged users will become. He also talked about a long-term vision for partnering machine learning and human actors. He answered very thoughtfully a question about the ethical aspects of crowd-sourcing. Brooke Simmons (UCSD) showed us how easy it is to set up a crowd-sourcing project on Zooniverse; they have built an amazingly simple interface and toolkit. Steven Silverberg (Oklahoma) told us about Disk Detective and Julie Banfield (ANU) told us about Radio Galaxy Zoo. They both have amazing super-users, who have contributed to published papers. In the latter project, they have found (somewhat serendipitously) the largest radio galaxy ever found! One take-away from my perspective is that essentially all of the discoveries of the Unexpected have happened in the forums—in the deep social interaction parts of the citizen-science sites.

2017-02-27

galaxy masses; text as data

After a morning working on terminology and notation for the color–magnitude diagram model paper with Lauren Anderson (Flatiron), I went to two seminars. The first was Jeremy Tinker (NYU) talking about the relationship between galaxy stellar mass and dark-matter halo mass as revealed by fitting of number-count and clustering data in large-scale structure simulations. He finds that only models with extremely small scatter (less—maybe far less—than 0.18 dex) are consistent with the data, and that the result is borne out by follow-ups with galaxy–galaxy lensing and other tests. This is very hard to understand within any realistic model for how galaxies form, and constitutes a new puzzle for standard cosmology plus gastrophysics.

In the afternoon there was a very wide-ranging talk by Mark Dredze (JHU) on data-science methods for social science, intervention in health issues, and language encoding. He is interested in taking topic models and either deepening them (to make better features) or else enriching their probabilistic structure. It is all very promising, though these subjects are—despite their extreme mathematical sophistication—in their infancy.

2017-02-26

one paragraph per day

[I have been on vacation for a week.]

All I have done in the last week is (fail to) keep up with email (apologies y'all) and write one paragraph per day in the nascent paper with Lauren Anderson (Flatiron) about our data-driven model of the color–magnitude diagram. The challenge is to figure out what to emphasize: the fact that we de-noise the parallaxes, or the fact that we can extend geometric parallaxes to more distant stars, or the fact that we don't need stellar models?

2017-02-17

57 elements; research meetings

Today the astro seminar was given by Or Graur (CfA). He spoke about various discoveries he and collaborators have made in type Ia supernovae. For me, the most exciting was the discovery of atomic-mass-57 elements, which he can find by looking at the late-time decay: The same way we identify the mass-56 elements by timing supernova decays at intermediate times, he finds the mass-57 elements. The difference is that they show up at much later times (decay times of years). He pointed out a caveat, which is that the late-time light curve can also be affected by unresolved light echoes. That's interesting and got me thinking (once again) about all the science related to light echoes that might be under the radar right now.

I hosted today my first-ever undergraduate research meeting. I got together undergraduates and pre-PhD students who are interested in doing research, and we discussed the Kepler and APOGEE data. My plan (and remember, I like to fail fast) is to have them work together on overlapping projects, so they all have coding partners but also their own projects. With regular meetings, it can fit into schedules and become something like a class!

2017-02-15

stellar twins and stellar age indicators

In the stars group meeting at CCA, Keith Hawkins (Columbia) blew us away with examples of stellar twins, identified with HARPS spectra. They were chosen to have identical derived spectroscopic parameters in three or four labels, but were amazingly identical at signal-to-noise of hundreds. He then showed us some he found in the APOGEE data, using very blunt tools to identify twins. This led to a long discussion of what we could do with twins, and things we expect to find in the data, especially regarding failures of spectroscopic twins to be identical in other respects, and failures of twins identified through means other than spectroscopic to be identical spectroscopically. Lots to do!

This was followed by Ruth Angus (Columbia) walking us through all the age-dating methods we have found for stars. The crowd was pretty unimpressed with many of our age indicators! But they agreed that we should take a self-calibration approach to assemble them and cross-calibrate them. It also interestingly connects to the twins discussion that preceded. Angus and I followed the meeting with a more detailed discussion about our plans, in part so that she can present them in a talk in her near future.

2017-02-14

abundance dimensionality, optimized photometric estimators

Kathryn Johnston (Columbia) organized a Local-Group meeting of locals, or a local group of Local Group researchers. There were various discussions of things going on in the neighborhood. Natalie Price-Jones (Toronto) started up a lot of discussion with her work on the dimensionality of chemical-abundance space, working purely with the APOGEE spectral data. That is, they are inferring the dimensionality without explicitly measuring chemical abundances or interpreting the spectra at all. Much of the questioning centered on how they know that the diversity they see is purely or primarily chemical rather than, say, instrumental or stellar nuisances.

At lunch time there were amusing things said at the Columbia Astro Dept Pizza Lunch. One was a very nice presentation by Benjamin Pope (Oxford) about how to do precise photometry of saturated stars in the Kepler data. He has developed a method that fully scoops me in one of my unfinished projects: The OWL, in which the pixel weights used in his soft-aperture photometry are found through the optimization of a (very clever, in Pope's case) convex objective function. After the Lunch, we discussed a huge space of generalizations, some in the direction of more complex (but still convex) objectives, and others in the direction of train-and-test to ameliorate over-fitting.
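
For reference, my guess at the simplest convex member of that family (a sketch, not necessarily Pope's objective): choose pixel weights w to minimize the photometric variance subject to unit response to the star's PSF profile f,

    \min_w \; w^T C \, w \quad {\rm subject~to} \quad w^T f = 1
    \qquad \Longrightarrow \qquad
    w = \frac{C^{-1} f}{f^T C^{-1} f}

where C is the pixel-noise covariance; the generalizations we discussed add terms and constraints while keeping the problem convex.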