Abi Polin (Berkeley) came through NYU this week. Today she delivered a great seminar on explosions of white dwarfs. She is looking at different ignition mechanisms, and trying to predict the resulting supernova spectra and light curves. This modeling requires a huge range of physics, including gas physics, nuclear reaction networks, and photospheres (both for absorption and emission lines). The current models have serious limitations (like being one-d, which she intends to fix during her PhD), but they strongly suggest that type Ia supernovae (the ones that are created by white-dwarf explosions) do seem to come from a narrow range in white-dwarf mass. If you go too high in mass, you over-produce nickel. If you go too low in mass, you under-produce nickel and get way under-luminous. In addition to the NYU CCPP crew, Saurabh Jha (Rutgers) and Armin Rest (STScI) were in the audience, so this talk was followed by a lively lunch! Jha suggested that the narrow mass range implied by the talk could also help with understanding the standard-candle-ness of these explosions.
I had the realization that I can reduce my concerns about radial-velocity fitting (given a spectrum) to the problem of centroiding a single spectral line, and then scale up using information theory. So there is a paper to write! I sketched an abstract.
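To make the single-line idea concrete, here is a toy sketch (not the paper's actual calculation; the line parameters and noise level are hypothetical) of the Cramér–Rao bound on the radial velocity of one Gaussian absorption line, with the information-theory scaling across lines:

```python
import numpy as np

C = 299792.458  # speed of light, km/s

def line_model(wave, v, center=5000.0, depth=0.3, width=0.05, continuum=1.0):
    """Gaussian absorption line, Doppler-shifted by velocity v (km/s)."""
    shifted = center * (1.0 + v / C)
    return continuum * (1.0 - depth * np.exp(-0.5 * ((wave - shifted) / width) ** 2))

def rv_crlb(wave, noise_per_pixel, **kwargs):
    """Cramer-Rao lower bound (km/s) on the RV of a single line,
    from the Fisher information under iid Gaussian pixel noise."""
    dv = 1e-3  # km/s step for a numerical derivative
    deriv = (line_model(wave, +dv, **kwargs) - line_model(wave, -dv, **kwargs)) / (2 * dv)
    fisher = np.sum(deriv ** 2) / noise_per_pixel ** 2
    return 1.0 / np.sqrt(fisher)

wave = np.linspace(4999.0, 5001.0, 200)  # wavelength grid, Angstroms
sigma_shallow = rv_crlb(wave, 0.01, depth=0.2)
sigma_deep = rv_crlb(wave, 0.01, depth=0.4)  # twice the depth: half the bound

# Fisher information adds across independent lines, so the bound on the
# full-spectrum RV scales as 1 / sqrt(N_lines): this is the "scale up".
sigma_full = sigma_shallow / np.sqrt(100)
```

The point of the sketch is just the scalings: the bound goes inversely with line depth (the derivative is proportional to depth), and combining N comparable lines tightens it by the square root of N.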
In the morning, Andrei Mesinger (SNS) gave a talk about the epoch of reionization. He argued fairly convincingly that, between Planck, Lyman-alpha emission from very high-redshift quasars and galaxies, and the growth of dark-matter structure, the epoch of reionization is pretty well constrained now, around redshift of 7 to 8. The principal observation (from my perspective) is that the optical depth to the surface of last scattering is close to the minimum possible value (given what we know out to redshifts of 5 or 6). He also discussed what we will learn from 21-cm projects, and—like Colin Hill a few weeks ago—is looking for the right statistics. I really have to start a project that finds decisive (and symmetry-constrained) summary statistics, given simulations!
At Stars group meeting, Keith Hawkins (Columbia) summarized the Nice meeting on Gaia. Some Gaia Sprint and Camp Hogg results were highlighted there in Anthony Brown's talk, apparently. There were results on Gaia accuracy of interest to us (and testable by us), and also things about the velocity distribution in the Galaxy halo.
Tjitske Starkenburg (Flatiron) talked about counter-rotating components in disk galaxies: She would like to find observational signatures that can be found in both simulations and also the data. But she also wants to understand their origins in the simulations. Interestingly, she finds many different detailed formation histories that can lead to counter-rotating components. That is consistent with their high frequency in the observed samples.
I finally got some writing done today, in the Anderson paper on the empirical, deconvolved color-magnitude diagram. We are very explicitly structuring the paper around the assumptions, and each of the assumptions has a name. This is part of my grand plan to develop a good, repeatable, useful, and informative structure for a data-analysis paper.
I missed a talk last week by Andrew Pontzen (UCL), so I found him today and discussed matters of common interest. It was a wide-ranging conversation but two highlights were the following: We discussed causality or causal explanations in a deterministic-simulation setting. How could it be said that “mergers cause star bursts”? If everything is deterministic, isn't it equally true that star bursts cause mergers? One question is the importance of time or time ordering (or really light-cone ordering). For the statisticians who think about causality this doesn't enter explicitly. I think that some causal statements in galaxy evolution are wrong on philosophical grounds but we decided that maybe there is a way to save causality provided that we always refer to the initial conditions (kinematic state) on a prior light cone. Oddly, in a deterministic universe, causal explanations are mixed up with free will and subjective knowledge questions.
Another thing we discussed is a very neat trick he figured out to reduce cosmic variance in simulations of the Universe: Whenever you simulate from some initial conditions, also simulate from the negative of those initial conditions (all phases rotated by 180 degrees, or all over-densities turned to under, or whatever). The average of these two simulations will cancel out some non-trivial terms in the cosmic variance!
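A minimal sketch of why the trick works (a toy stand-in for a simulation, not an N-body code): evolve Gaussian initial conditions through a nonlinear map with a quadratic term, and the antithetic average of a realization and its negation cancels the odd-order contribution to the scatter exactly.

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_simulation(delta0):
    """Toy stand-in for gravitational evolution: a nonlinear
    transformation of the initial over-density field delta0."""
    # linear growth plus a quadratic (second-order) correction
    return delta0 + 0.5 * delta0 ** 2

n_fields, n_modes = 2000, 64
estimates, paired = [], []
for _ in range(n_fields):
    delta0 = rng.normal(0.0, 1.0, n_modes)  # Gaussian initial conditions
    m_plus = toy_simulation(delta0).mean()
    m_minus = toy_simulation(-delta0).mean()  # all phases rotated by 180 degrees
    estimates.append(m_plus)
    paired.append(0.5 * (m_plus + m_minus))  # antithetic average

# the paired average cancels the odd (linear-in-delta0) term exactly, so
# its variance across realizations is much smaller than a single sim's
print(np.var(estimates), np.var(paired))
```

In this toy the linear term vanishes identically in the pair average, leaving only the (smaller) variance of the quadratic term; in a real simulation the cancellation is of the leading odd-order pieces of the cosmic variance.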
The day ended with a long call with Megan Bedell (Chicago), going over my full list of noise sources in extreme precision radial-velocity data (think: finding and characterizing exoplanets). She confirmed everything in my list, added a few new things, and gave me keywords and references. I think a clear picture is emerging of how we should attack (what NASA engineers call) the tall poles. However, it is not clear that the picture will get set down on paper in time for the Exoplanet Research Program funding call!
Today not much! I had a valuable conversation with Trisha Hinners (NG Next) about machine-learning projects with the Kepler data, and I did some pen-and-paper writing and planning for my proposal on exoplanet-related extreme precision radial-velocity measurements.
Ruth Angus (Columbia) and I discussed the state of her hierarchical Bayesian model to self-calibrate a range of stellar age indicators. Bugs are fixed and it appears to be working. We discussed the structure of a Gibbs sampler for the problem. We reviewed work Angus and also Melissa Ness (MPIA) did at the 2016 NYC Gaia Sprint on vertical action dispersion as a function of stellar age. Beautiful results! We had an epiphany and decided that we have to publish these results, without waiting for the Bayesian inference to be complete. That is, we should publish a simple empirical paper based on TGAS, proposing the general point that vertical action provides a clock with very good properties: It is not precise, but it is potentially very accurate, because it is very agnostic about what kind of star it is timing.
I had lunch with Jesse Muir (Michigan), and then she gave an informal seminar after lunch. She has been working on a number of things in cosmological measurement. One highlight is an investigation of the anomalies (strange statistical outliers or badly fit aspects) in the CMB: She has asked how they are related, and whether they are really independent. I discussed with her the possibility that we might be able to somehow lexicographically order all possible anomalies and then search for them in an ordered way, keeping track of all possible measurements and their outcomes, as a function of position in the ordering. The reason I am interested in this is because some of the anomalies are “odd enough” that I would expect them to come up pretty late in any ordering. That makes them not-that-anomalous! This somehow connects to p-values and p-hacking and so on. I also discussed with Muir the possibility of looking for anomalies in the large-scale structure. This should be an even richer playground.
In Stars group meeting, Ruth Angus (Columbia) showed her catalog of rotation periods in the Kepler and K2 fields. She has a huge number! We discussed visualizations of these that would be convincing and also possibly create new scientific leads.
Also in Stars group meeting, Arash Bahramian (MSU) spoke about black holes in globular clusters. He discussed how they use simultaneous radio and X-ray observations to separate the BHs from neutron stars: Radio reveals jet energy and X-ray reveals accretion energy, which (empirically) are different for BHs and NSs. However, in terms of making a data-driven model, the only situations in which you are confident that something is a NS is when you see X-ray bursts (because: surface effect) and the only situations in which you are confident that something is a BH is when you can see a dynamical mass substantially greater than 1.4 Solar masses (because: equation of state). He highlighted some oddities around the cluster Terzan 5, which is the globular cluster with the largest number of X-ray sources, and also an extremely high density and inferred stellar collision rate. This was followed by much discussion of relationships between collision rate and other cluster properties, and also some discussion of individual X-ray sources.
[In non-research news: Today I became an employee of the Flatiron Institute, as a new group leader within the CCA! Prior to today I was only in a consulting role.]
In the CCPP Brown-Bag talk, Duccio Pappadopulo (NYU) gave a very nice and intuitive introduction to the strong CP problem (although he really presented it as the strong T problem!). He discussed the motivation for the QCD axion and then experimental bounds on it. He mentioned at the end his own work that permits the QCD axion to have much stronger couplings to photons, and therefore be much more readily detected in the laboratory. He discussed an important kind of experiment that I had not heard about previously: The helioscope, which is an x-ray telescope in a strong magnetic field, looking at the Sun, but inside a shielded building (search "axion helioscope"). That is, the experiment asks the question: Can we see through the walls? This tests the coupling of the QCD sector and the photon to the axion, because (QCD) axions are created in the Sun, and some will convert (using the magnetic field to obtain a free photon) into x-ray photons at the helioscope. Crazy, but seriously these are real experiments! I love my job.
Today it was a pleasure to participate in the PhD defense of Yuqian Liu (NYU), who has exploited the world's largest dataset on stripped supernovae, part of the huge spectral collection of Maryam Modjaz's group at NYU. She pioneered various data-driven methods for the spectral analysis. One is to create a data-driven or empirical noise model using filtering in the Fourier domain. Another is to fit shifted and broadened lines using empirical spectra and Bayesian inference. She uses these methods to automatically make uniform measurements of spectral features from very heterogeneous data from multiple sources of different levels of reliability. Her results rule out various (one might say: all!) physical models for these supernovae. Her results are all available open-source, and she has pushed her results into SNID, which is the leading software supernova classifier. Congratulations Dr Liu!
Because of the availability of Dan Huber (Hawaii) in the city today, we moved Stars group meeting to Thursday! He didn't disappoint, telling us about asteroseismology projects in the Kepler and K2 data. He likes to emphasize that the >20,000 stars in the Kepler field that have measured nu-max and delta-nu have—every one of them—been looked at by (human) eye. That is, there is no fully safe automated method for measuring these. My loyal reader knows that this is a constant subject of conversation in group meeting, and has been for years now. We discussed developing better methods than what is done now.
In my mind, this is all about constructing estimators, which is something I know almost nothing about. I proposed to Stephen Feeney (Flatiron) that we simulate some data and play around with it. Sometimes good estimators can be inspired by fully Bayesian procedures. We could also go fully Bayes on this problem! We have the technology (now, with new Gaussian-Process stuff). But we anticipate serious slowness: We need methods that will work for TESS, which means they have to run on hundreds of thousands to millions of light curves.
In the afternoon, Chang Hoon Hahn (NYU) defended his PhD, which is on methods for making large-scale structure measurements. We have joked for many years that my cosmology group meeting is always and only about fiber collisions. (Fiber collisions: Hardware-induced configurational constraints on taking spectra or getting redshifts of galaxies that are close to one another on the sky.) This has usually been Hahn's fault, and he didn't let us down in his defense. Fiber collisions is a problem that seems like it should be easy and really, really is not. It is an easy problem to solve if you have an accurate cosmological model at small scales! But the whole point is that we don't. And in the future, when surveys use extremely complicated fiber positioners (instead of just drilling holes), the fiber-collision problem could become very severe. Very. As in: It might require knowing (accurately) very high-point functions of the galaxy distribution. More on this at some point: This problem has legs. But, in the meantime: Congratulations Dr Hahn!
In the early morning, Ana Bonaca (Harvard) and I discussed our information-theory project on cold stellar streams. We talked about generalizing our likelihood model or form, and what that would mean for the lower bound (on the variance of any unbiased estimator; the Cramér–Rao bound). I have homework.
At the Flatiron, instead of group meeting (which we moved to tomorrow), we had a meeting on the strange pair of stars that Semyeong Oh (Princeton) and collaborators have found, with very odd chemical differences. We worked through the figures for the paper, and all the alternative explanations for their formation, sharpening up the arguments. In a clever move, David Spergel (Flatiron) named them Kronos and Krios. More on why that, soon.
In the afternoon, in cosmology group meeting, Boris Leistedt (NYU) talked about his grand photometric-redshift plan, in which the templates and the redshifts are all estimated together in a beautiful hierarchical model. He plans to get photometric redshifts with no training redshifts whatsoever, and also no use of pre-set or known spectral templates (though he will compose the data-driven templates out of sensible spectral components). There was much discussion of the structure of the graphical model (in particular about selection effects). There was also discussion about doing low-level integrals fast or analytically.
In principle, writing a funding proposal is supposed to give you an opportunity to reflect on your research program, think about different directions, and get new insights about projects not yet started. In practice it is a time of copious writing and anxiety, coupled with a lack of sleep! However, I have to admit that today my experience was the former: I figured out (in preparing my Exoplanet Research Program proposal for NASA) that I have been missing some very low-hanging fruit in my thinking about the error budget for extreme precision radial-velocity experiments:
RVs are obtained (usually) by cross-correlations, and cross-correlations only come close to saturating the Cramér–Rao bound when the template spectrum is extremely similar to the true spectrum. That just isn't even close to true for most pipelines. Could this be a big term in the error budget? Maybe not, but it has the great property that I can compute it. That's unlike most of the other terms in the error budget! I had a call with Megan Bedell (Chicago) at the end of the day to discuss the details of this. (This also relates to things I am doing with Jason Cao (NYU).)
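Here is the computation I mean, in toy form (standard linearized matched-filter algebra, with a hypothetical Gaussian line; not any pipeline's actual numbers): the variance of the cross-correlation velocity estimate, compared to the Cramér–Rao bound, when the template is a too-broad version of the true spectrum.

```python
import numpy as np

C = 299792.458  # speed of light, km/s

def gaussian_line(wave, center=5000.0, depth=0.3, width=0.05):
    """Continuum-normalized Gaussian absorption line."""
    return 1.0 - depth * np.exp(-0.5 * ((wave - center) / width) ** 2)

wave = np.linspace(4999.0, 5001.0, 400)  # wavelength grid, Angstroms
sigma_n = 0.01  # per-pixel Gaussian noise

def dmodel_dv(width):
    """Derivative of the line with respect to velocity (numerical,
    via the Doppler shift of the line center)."""
    dv = 1e-3  # km/s
    up = gaussian_line(wave - 5000.0 * (+dv) / C, width=width)
    dn = gaussian_line(wave - 5000.0 * (-dv) / C, width=width)
    return (up - dn) / (2 * dv)

s_prime = dmodel_dv(width=0.05)  # true spectrum
t_prime = dmodel_dv(width=0.10)  # mismatched (too-broad) template

# linearized variance of the cross-correlation velocity estimate,
# versus the Cramer-Rao bound; Cauchy-Schwarz says var_ccf >= crlb,
# with equality only when the template derivative matches the truth
var_ccf = sigma_n**2 * np.sum(t_prime**2) / np.sum(s_prime * t_prime)**2
crlb = sigma_n**2 / np.sum(s_prime**2)
print(np.sqrt(var_ccf / crlb))  # > 1: the template-mismatch penalty
```

For this particular (made-up) mismatch the penalty is tens of percent in the RV uncertainty, which is exactly the kind of term that is computable, spectrum by spectrum, from the template and the data alone.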
In other news, I spent time reading about linear algebra, (oddly) to brush up on some notational things I have been kicking around. I read about tensors in Kusse and Westwig and, in the end, I was a bit disappointed: They never use the transpose operator on vectors, which I think is a mistake. However, I did finally (duh) understand the difference between contravariant and covariant tensor components, and why I have been able to do non-orthonormal geometry (my loyal reader knows that I think of statistics as a sub-field of geometry) for years without ever worrying about this issue.
I gave the CCPP Brown-Bag talk today, about how the Gaia mission works, according to my own potted story. I focused on the beautiful hardware design and the self-calibration.
Before that, Cato Sanford (NYU) defended his PhD, about modeling non-equilibrium systems in which there are swimmers (think: cells) in a homogeneous fluid. He used a very simple Gaussian Process as the motive force for each swimmer, and then asked things like: Is there a pressure force on a container wall? Are there currents when the force landscape is non-trivial? And so on. His talk was a bit bio-stat-mech for my astrophysical brain, but I was stoked with the results, and I feel like the things we have done with Gaussian Processes might lead to intuitions in these crazy situations. The nice thing is that if you go from Brownian Motion to a GP-regulated walk, you automatically go out of equilibrium!
I violated house rules today and spent a Saturday continuing work from yesterday on the planning and organization of the AS4 proposal. We slowly walked through the whole proposal outline, assigning responsibilities for each section. We then walked through again, designing figures that need to be made, and assigning responsibilities for those too. It took all day! But we have a great plan for a great proposal. I'm very lucky to have this impressive set of colleagues.
Today was the first day of the AS-4 (After-Sloan-4) proposal-writing workshop, in which we started a sprint towards a large proposal for the Sloan Foundation. Very intelligently, Juna Kollmeier (OCIW) and Hans-Walter Rix (MPIA) started the meeting by having every participant give a long introduction, in which they not only said who they are and what they are interested in, but they also said what they thought the biggest challenges are in making this project happen. This took several hours, and got a lot of the big issues onto the table.
For me, the highlights of the day were presentations by Rick Pogge (OSU) and Niv Drory (Texas) about the hardware work that needs to happen. Pogge talked about the fiber positioning system, that will include robots, and a corrector, and a [censored] of a lot of sophisticated software (yes, I love this). It will reconfigure fast, to permit millions (something like 25 million) exposures (in five years) with short exposure times. Pogge really convinced me of the feasibility of what we are planning on doing, and delivered a realistic (but aggressive) timeline and budget.
Drory talked about the Local Volume Mapper, which mates a fiber-based IFU to a range of telescopes with different focal lengths (but same f-ratio) to make 3-d data cubes at different scales for different objects and different scientific objectives. It is truly a genius idea (in part because it is so simple). He showed us that they are really, really good at making close-packed fiber bundles, something they learned how to do with MaNGA.
It was a great day of serious argument, brutally honest discussion of trade-offs, and task lists for a hard proposal-writing job ahead.
Both Flatiron group meetings were great today. In the first, Nathan Leigh (AMNH) spoke about collisions of star systems (meaning 2+1 interactions, 2+2, 2+3, and 3+3), using collisionless dynamics and the sticky star approximation (to assess collisions). He finds a simple scaling of collision probabilities in terms of combinatorics; that is, the randomness or chaos is efficient, or more efficient than you might think. The crowd had many questions about scattering in stellar systems and equipartition.
This led to a wider discussion of dynamical scattering. We asked the question: Can we learn about dynamical heating in stellar systems by looking at residual exoplanet populations (for example, if the heating is by close encounters by stars, systems should be truncated)? We concluded that wide separation binaries are probably better tracers from the perspective that they are easier to see. Then we asked: Can the Sun's own Oort cloud be used to measure star-star interactions? And: Are there interstellar comets? David Spergel (Flatiron) pointed out the (surprising, to me) fact that there are no comets on obviously hyperbolic orbits.
Raja Guhathakurta (UCSC) is in town; he showed an amazing video zooming in to a tiny patch of Andromeda’s disk. He discussed Julianne Dalcanton’s dust results in M31 (on which I am a co-author). He then showed us detailed velocity measurements he has made for 13,000 (!) stars in the M31 disk. He finds the velocity dispersion of the disk grows with age, and grows faster and to larger values than in the Milky-Way disk. That led to more lunch-time speculation.
In the cosmology meeting, Shirley Ho (CMU) spoke about large-scale structure and machine learning. She asked the question: Can we use machine learning to compare simulations to data? In order to address this, she is doing a toy project: Compare simulations to simulations. She finds that a good conv-net does as well as the traditional power-spectrum analysis. This led to some productive discussion of where machine learning is most valuable in cosmology. Ben Wandelt (Paris) hypothesized that a machine-learning emulator can’t beat an n-body simulation. I disagreed (though on weak grounds)! We proposed that we set up a challenge of some kind, very well specified.
Ben Wandelt then spoke about linear inverse problems, on which he is doing very creative and promising work. He classified foreground approaches (for LSS and CMB) into Avoid or Adapt or Attack. On Avoid: He is using a low-rank covariance constraint to find foregrounds (This capitalizes on smooth wavelength (frequency) dependences, but reduces detailed assumptions). He showed that this separates signal and foreground—by the signal being high-rank and CDM-like (isotropic, homogeneous, etc), while the foreground is low rank (smooth in wavelength space). He then switched gears and showed us an amazingly high signal-to-noise void–galaxy cross-correlation function. We discussed how the selection affects the result. The cross-correlation is strongly negative at small separations and shows an obvious Alcock–Paczynski effect. David Spergel asked: Since this is an observation of “empty space”, does it somehow falsify modified GR or radical particle things?
Today Geoff Ryan (NYU) defended his PhD. I wrote a few things about his work here last week and he did not disappoint in the defense. The key idea I take from his work is: In an axisymmetric system (axisymmetric matter distribution and axisymmetric force law), material will not accrete without viscosity; it will settle into an incredibly long-lived disk (like Saturn's rings!). This problem has been solved by adding viscosity (artificially, but we do expect effective sub-grid viscosity from turbulence and magnetic fields), but less has been done about non-axisymmetry. Ryan shows that in the case of a binary system (this generates the non-axisymmetry), accretion can be driven without any viscosity. That's important and deep. He also talked about numerics, and also about GRB afterglows. It was a great event and we will be sad to see him go.
I had a valuable chat in the morning with Adrian Price-Whelan (Princeton) about some hypothesis testing, for stellar pairs. The hypotheses are: unbound and unrelated field stars, co-moving but unbound, and co-moving because bound. We discussed this problem as a hypothesis test, and also as a parameter estimation (estimating binding energy and velocity difference). My position (that my loyal reader knows well) is that you should never do a hypothesis test when you can do a parameter estimation.
A Bayesian hypothesis test involves computing fully marginalized likelihoods (FMLs). A parameter estimation involves computing partially marginalized posteriors. When I present this difference to Dustin Lang (Toronto), he tends to say “how can marginalizing out all but one of your parameters be so much easier than marginalizing out all your parameters?”. Good question! I think the answer has to do with the difference between estimating densities (probability densities that integrate to unity) and estimating absolute probabilities (numbers that sum to unity). But I can't quite get the argument right.
In my mind, this is connected to an observation I have seen over at Andrew Gelman's blog more than once: When predicting the outcome of a sporting event, it is much better to predict a pdf over final scores than to predict the win/loss probability. This is absolutely my experience (context: horse racing).
Eliot Quataert (Berkeley) gave the astrophysics seminar today. He spoke about the last years-to-days in the lifetime of a massive star. He is interested in explaining the empirical evidence that suggests that many of these stars cough out significant mass ejection events in the last years of their lives. He has mechanisms that involve convection in the core driving gravity (not gravitational) waves in the outer parts that break at the edge of the star. His talk touched on many fundamental ideas in astrophysics, including the conditions under which an object can violate the Eddington luminosity. For mass-loss driven (effectively) by excess luminosity, you have to both exceed (some form of) the Eddington limit and deposit energy high enough up in the star's radius that there is enough total energy (luminosity times time) to unbind the outskirts. His talk also (inadvertently) touched on some points of impedance matching that I am interested in. Quataert's research style is something I admire immensely: Very simple, very fundamental arguments, backed up by very good analytic and computational work. The talk was a pleasure!
After the talk, I went to lunch with Daniela Huppenkothen (NYU), Jack Ireland (GSFC), and Andrew Inglis (GSFC). We spoke more about possible extensions of things they are working on in more Bayesian or more machine-learning directions. We also talked about the astrophysics Decadal process, and the impacts this has on astrophysics missions at NASA and projects at NSF, and comparisons to similar structures in the Solar world. Interestingly rich subject there.
In the morning, Jack Ireland (GSFC) and Andrew Inglis (GSFC) gave talks about data-intensive projects in Solar Physics. Ireland spoke about his Helioviewer project, which is a rich, multi-modal, interactive interface to the multi-channel, heterogeneous, imaging, time-stream, and event data on the Sun, coming from many different missions and facilities. It is like Google Earth for the Sun, but also with very deep links into the raw data. This project has made it very easy for scientists (and citizen scientists) from all backgrounds to interact with and obtain Solar data.
Inglis spoke about his AFINO project to characterize all Solar flares in terms of various time-series (Fourier) properties. He is interested in very similar questions for Solar flares that Huppenkothen (NYU) is interested in for neutron-star and black-hole transients. Some of the interaction during the talk was about different probabilistic approaches to power-spectrum questions in the time domain.
Over lunch I met with Ruth Angus (Columbia) to consult on her stellar chronometer projects. We discussed bringing in vertical action (yes, Galactic dynamics) as a stellar clock or age indicator. It is an odd indicator, because the vertical action (presumably) random-walks with time. This makes it a very low-precision clock! But it has many nice properties, like that it works for all classes of stars (possibly with subtleties), in our self-calibration context it connects age indicators of different types from different stars, and it is good at constraining old ages. We wrote some math and discussed further our MCMC sampling issues.
At Stars group meeting, Juna Kollmeier (OCIW) spoke about the plans for the successor project to SDSS-IV. It will be an all-sky spectroscopic survey, with 15 million spectroscopic visits, on 5-ish million targets. The cadence and plan are made possible by advances in robot fiber positioning, and The Cannon, which permits inferences about stars that scale well with decreasing signal-to-noise ratio. The survey will use the 2.5-m SDSS telescope in the North, and the 2.5-m du Pont in the South. Science goals include galactic archaeology, stellar systems (binaries, triples, and so on), evolved stars, origins of the elements, TESS scientific support and follow-up, and time-domain events. The audience had many questions about operations and goals, including the maturity of the science plan. The short story is that partners who buy in to the survey now will have a lot of influence over the targeting and scientific program.
Keith Hawkins (Columbia) showed his red-clump-star models built on TGAS and 2MASS and WISE and GALEX data. He finds an intrinsic scatter of about 0.17 magnitude (RMS) in many bands, and, when the scatter is larger, there are color trends that could be calibrated out. He also, incidentally, infers a dust reddening for every star. One nice result is that he finds a huge dependence of the GALEX photometry on metallicity, which has lots of possible scientific applications. The crowd discussed the extent to which theoretical ideas support the standard-ness of RC stars.
The research highlight of the day was a beautiful PhD defense by my student MJ Vakili (NYU). Vakili presented two big projects from his thesis: In one, he has developed fast mock-catalog software for understanding cosmic variance in large-scale structure surveys. In the other, he has built and run an inference method to learn the pixel-convolved point-spread function in a space-based imaging device.
In both cases, he has good evidence that his methods are the best in the world. (We intend to write up the latter in the Summer.) Vakili's thesis is amazingly broad, going from pixel-level image processing work that will serve weak-lensing and other precise imaging tasks, all the way up to new methods for using computational simulations to perform principled inferences with cosmological data sets. He was granted a PhD at the end of an excellent defense and a lively set of arguments in the seminar room and in committee. Thank you, MJ, for a great body of work, and a great contribution to my scientific life.
I talked to Ana Bonaca (Harvard) and Lauren Anderson (Flatiron) about their projects in the morning. With Bonaca I discussed the computation of numerically stable derivatives with respect to parameters. This is not a trivial problem when the model (of which you are taking derivatives) is itself a simulation or computation. With Anderson we edited and prioritized the to-do list to finish writing the first draft of her paper.
At lunch time, Geoff Ryan (NYU) gave the CCPP brown-bag talk, about accretion modes for binary black holes. Because the black holes orbit in a cavity in the circum-binary accretion disk, and then are fed by a stream (from the inner edge of the cavity), there is an unavoidable creation of shocks, either in transient activity or in steady state. He analyzed the steady-state solution, and finds that the shocks drive accretion. It is a beautiful model for accretion that does not depend in any way on any kind of artificial or sub-grid viscosity.
I worked on putting references into my similarity-of-objects document (how do you determine that two different objects are identical in their measurable properties?), and tweaking the words, with the hope that I will have something postable to the arXiv soon.
I spent today at JPL, where Leonidas Moustakas (JPL) set up for me a great schedule with various of the astronomers. I met the famous John Trauger (JPL), who was the PI on WFPC2 and deserves some share of the credit for repairing the Hubble Space Telescope. I discussed coronagraphy with Trauger and various others. I learned about the need for coronagraphs to have two (not just one) deformable mirrors to be properly adaptive. With Dimitri Mawet (Caltech) I discussed what kind of data set we would like to have in order to learn in a data-driven way to predictively adapt the deformable mirrors in a coronagraph that is currently taking data.
With Eric Huff (JPL) I discussed the possibility of doing weak lensing without ever explicitly measuring any galaxies—that is, measuring shear in the pixels of the images of the field directly. I also discussed with him the (apparently insane but maybe not) idea of using the Sun itself as a gravitational lens, capable of imaging continents on a distant, rocky exoplanet. This requires getting a spacecraft out to some 550 AU, and then positioning it to km accuracy! Oh and then blocking out the light from the Sun.
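For scale (my constants and arithmetic, not anything from the conversation): a ray grazing the Sun at impact parameter b is deflected by 4GM/(c²b), so it crosses the optical axis at distance b²c²/(4GM), which for b equal to the solar radius works out to the quoted ~550 AU:

```python
# Back-of-envelope focal distance of the solar gravitational lens.
G = 6.674e-11       # m^3 kg^-1 s^-2
M_sun = 1.989e30    # kg
c = 2.998e8         # m / s
R_sun = 6.957e8     # m; the smallest usable impact parameter
AU = 1.496e11       # m

# d = b / alpha with alpha = 4 G M / (c^2 b), evaluated at b = R_sun.
d_focus = R_sun**2 * c**2 / (4 * G * M_sun)
print(d_focus / AU)  # roughly 550 AU
```

Rays with larger impact parameters focus even farther out, which is why the spacecraft has to get to *at least* this distance.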
Martin Elvis (CfA) gave a provocative talk today, about the future of NASA astrophysics in the context of commercial space, which might drive down prices on launch vehicles, and drive up the availability of heavy lift. A theme of his talk, and a theme of many of my conversations during the day, was just how long the time-scales are on NASA astrophysics missions, from proposal to launch. At some point missions might start to take longer than a career; that could be very bad (or at least very disruptive) for the field.
I spent today at Caltech, where I spoke about self-calibration. Prior to that I had many interesting conversations. From Anna Ho (Caltech) I learned that ZTF is going to image 15,000 square degrees per night. That is life-changing! I argued that they should position their fields to facilitate self-calibration, which might break some ideas they might have about image differencing.
With Nadia Blagorodnova (Caltech) I discussed calibration of the SED Machine, which is designed to do rapid low-resolution follow-up of ZTF and LSST events. They are using dome and twilight flats (something I said is a bad idea in my colloquium) and indeed they can see that they are deficient or inaccurate. We discussed how to take steps towards self-calibration.
With Heather Knutson (Caltech) I discussed long-period planets. She is following up (with radial velocity measurements) the discoveries that Foreman-Mackey and I (and others) made in the Kepler data. She doesn't clearly agree with our finding that there are something like 2 planets per star (!) at long periods, but of course her radial-velocity work has different sensitivity to planets. We discussed the possibility of using radial-velocity surveys to do planet populations work; she believes it is possible (something I have denied previously, on the grounds of unrecorded human decision-making in the observing strategies).
In my talk I made some fairly aggressive statements about Euclid's observing strategies and calibration. That got me some valuable feedback, including some hope that they will modify their strategies before launch. The things I want can be set or modified at the 13th hour!
I worked more today on my slides on self-calibration for the 2017 Neugebauer Lecture at Caltech. I had an epiphany, which is that the color–magnitude diagram model I am building with Lauren Anderson (Flatiron) can be seen in the same light as self-calibration. The “instrument” we are calibrating is the physical regularities of stars! (This can be seen as an instrument built by God, if you want to get grandiose.) I also drew a graphical model for the self-calibration of the Sloan Digital Sky Survey imaging data that we did oh so many years ago. It would probably be possible to re-do it with full Bayes with contemporary technology!
Last year, Dun Wang (NYU) and Dan Foreman-Mackey (UW) discovered, on a visit to Bernhard Schölkopf (MPI-IS), that independent components analysis can be used to separate spacecraft and stellar variability in Kepler imaging, and perform variable-source photometry in crowded-field imaging. I started to write that up today. ICA is a magic method, which can't be correct in detail, but which is amazingly powerful straight out of the box.
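Out of curiosity, here is the idea sketched with scikit-learn's FastICA on fake data. The toy signals and mixing matrix are entirely mine, and this is nothing like the real Kepler pixel-level pipeline; it just shows ICA unmixing two sources from several observed mixtures:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)

# Toy stand-ins: a smooth "stellar variability" signal and a
# square-wave "spacecraft systematic".
stellar = np.sin(2 * np.pi * 0.8 * t)
spacecraft = np.sign(np.sin(2 * np.pi * 0.13 * t))
S = np.c_[stellar, spacecraft]

# Each observed "pixel light curve" is a different linear mixture
# of the two sources, plus a little noise.
A = np.array([[1.0, 0.5], [0.7, 1.2], [0.3, 0.9]])
X = S @ A.T + 0.02 * rng.standard_normal((len(t), 3))

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)  # recovered sources, up to sign, scale, order
```

Straight out of the box, one recovered component tracks the stellar signal and the other tracks the systematic, which is the magic being exploited.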
I also worked on my slides for the 2017 Neugebauer Memorial Lecture at Caltech, which is on Wednesday. I am giving a talk the likes of which I have never given before.
I spent my research time today working through pages of the nearly-complete PhD dissertation of MJ Vakili (NYU). The thesis contains results in large-scale structure and image processing, which are related through long-term goals in weak lensing. In some ways the most exciting part of the thesis for me right now is the part on HST WFC3 IR calibration, in part because it is new, and in part because I am going to show some of these results in Pasadena next week.
In the morning, Colin Hill (Columbia) gave a very nice talk on secondary anisotropies in the cosmic microwave background. He has found a new (and very simple) way to detect the kinetic S-Z effect statistically, and can use it to measure the baryon fraction in large-scale structure empirically. He has found a new statistic for measuring the thermal S-Z effect too, which provides better power on cosmological parameters. In each case, his statistic or estimator is cleverly designed around physical intuition and symmetries. That led me to ask him whether even better statistics might be found by brute-force search, constrained by symmetries. He agreed and has even done some thinking along these lines already.
Today was an all-day meeting at the Flatiron Institute on neutrinos in cosmology and large-scale structure, organized by Francisco Villaescusa-Navarro (Flatiron). I wasn't able to be at the whole meeting, but two important things I learned in the part I saw are the following:
Chris Tully (Princeton) astonished me by showing his real, funded attempt to actually directly detect the thermal neutrinos from the Big Bang. That is audacious. He has a very simple design, based on capture of electron neutrinos by tritium that has been very loosely bound to a graphene substrate. Details of the experiment include absolutely enormous surface areas of graphene, and also very clever focusing (in a phase-space sense) of the liberated electrons. I'm not worthy!
Raul Jimenez (Barcelona) spoke about (among other things) a statistical argument for a normal (rather than inverted) hierarchy for neutrino masses. His argument depends on putting priors over neutrino masses and then computing a Bayes factor. This argument made the audience suspicious, and he got some heat during and after his talk. Some comments: One is that he is not just doing simple Bayes factors; he is learning a hierarchical model and assessing within that. That is a good idea. Another is that this is actually the ideal place to use Bayes factors: Both models (normal and inverted) have exactly the same parameters, with exactly the same prior. That obviates many of my usual objections (yes, my loyal reader may be sighing) to computing the integrals I call FML. I need to read and analyze his argument at some point soon.
One amusing note about the day: For technical reasons, Tully really needs the neutrino mass hierarchy to be inverted (not normal), while Jimenez is arguing that the smart money is on the normal (not inverted).
In Stars group meeting, Stephen Feeney (Flatiron) walked us through his very complete hierarchical model of the distance ladder, including supernova Hubble Constant measurements. He can self-calibrate and propagate all of the errors. The model is seriously complicated, but no more complicated than it needs to be to capture the covariances and systematics that we worry about. He doesn't resolve (yet) the tension between distance ladder and CMB (especially Planck).
Semyeong Oh (Princeton) and Adrian Price-Whelan (Princeton) reported on some of their follow-up spectroscopy of co-moving pairs of widely separated stars. They have a pair that is co-moving, moving at escape velocity in the halo, and separated by 5-ish pc! This could be a cold stellar stream detected with just two stars! How many of those will we find! Yet more evidence that Gaia changes the world.
Josh Winn (Princeton) dropped by and showed us a project that, by finding very precise stellar radii, gets more precise planet radii. That, in turn, shows that the super-Earths really split into two populations, super-Earths and mini-Neptunes, with a deficit between. Meaning: There are non-trivial features in the planet radius distribution. He showed some attempts to demonstrate that this is real, reminding me of the whole accuracy vs precision thing, once again.
In Cosmology group meeting, Dick Bond (CITA) corrected our use of “intensity mapping” to “line intensity mapping” and then talked about things that might be possible as we observe more and more lines in the same volume. There is a lot to say here, but some projects are going small and deep, and others are going wide and shallow; we learn complementary things from these approaches. One question is: How accurate do we need to be in our modeling of neutral and molecular gas, and the radiation fields that affect them, in order for us to do cosmology with these observables? I am hoping we can simultaneously learn things about the baryons, radiation, and large-scale structure.
My only research today was conversations about various matters of physics, astrophysics, and statistics with Dan Maoz (TAU), as we hiked near the Red Sea. He recommended these three papers on how to add and how to subtract astronomical images. I haven't read them yet, but as my loyal reader knows, the word “optimal” is a red flag for me, as in I'm-a-bull-in-a-bull-ring type of red flag. (Spoiler alert: The bull always loses.)
On the drive home Maoz expressed the extremely strong opinion that dumping a small heat load Q inside a building during the hot summer does not lead to any additional load on that building's air-conditioning system. I spent part of my late evening thinking about whether there are any conceivable assumptions under which this position might be correct. Here's one: The building is so leaky (of air) that the entire interior contents of the building are replaced before the A/C has cooled it by a significant amount. That would work, but it would also be a limit in which A/C doesn't do anything at all, really; that is, in this limit, the interior of the building is the same temperature as the exterior. So I think I concluded that if you have a well-cooled building, if you add heat Q internally, the A/C must do marginal additional work to remove it. One important assumption I am making is the following (and maybe this is why Maoz disagreed): The A/C system is thermostatic and hits its thermostatic limits from time to time. (And that is inconsistent with the ultra-leaky-building idea, above.)
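In that thermostatic limit, the marginal cost is simple arithmetic (toy numbers mine): the A/C must pump the extra heat Q back out, at an electrical cost of Q divided by its coefficient of performance.

```python
# Steady-state energy balance for a thermostatically held interior:
# every extra watt dumped inside must be pumped back out.
Q_internal = 500.0   # W, extra heat dumped inside (a big computer, say)
COP = 3.0            # a typical air-conditioner coefficient of performance

extra_ac_power = Q_internal / COP
print(extra_ac_power)  # about 167 W of extra electrical draw; not zero
```

So the marginal load is smaller than Q (thanks to the COP), but it is definitely not zero.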
I spent today at Tel Aviv University, where I gave the John Bahcall Astrophysics Lecture. I spoke about exoplanet detection and population inferences. I spent quite a bit of the day with Dovi Poznanski (TAU) and Dan Maoz (TAU). Poznanski and I discussed extensions and alternatives to his projects to use machine learning to find outliers in large astrophysical data sets. This continued conversations with him and Dalya Baron (TAU) from the previous evening.
Maoz and I discussed his conversions of cosmic star-formation history into metal enrichment histories. These involve the SNIa delay times, and they provide new interpretations of the alpha-to-Fe vs Fe-to-H ratio diagrams. The abundance ratios don't drop in alpha-to-Fe when the SNIa kick in (that's the standard story but it's wrong); they kick in when the SNIa contribution to the metal production rate exceeds the core-collapse rate. If the star-formation history is continuous, this can be far after the appearance of the first Ia SNe. Deep stuff.
The day gave me some time to reflect on my time with John Bahcall at the IAS. I have too much to say here, but I found myself in the evening reflecting on his remarkable and prescient scientific intuition. He was one of the few astronomers who understood, immediately on the early failure of HST, that it made more sense to try to repair it than try to replace it. This was a great realization, and transformed both astrophysics and NASA. He was also one of the few physicists who strongly believed that the Solar neutrino problem would lead to a discovery of new physics. Most particle physicists thought that the Solar model couldn't be that robust, and most astronomers didn't think about neutrinos. Boy was John right!
(I also snuck in a few minutes on my stellar twins document, which I gave to Poznanski for comments.)
Dan Foreman-Mackey (UW) crashed NYC today, surprising me, and disrupting my schedule. We began our day by arguing about the future of hierarchical modeling. His position is (sort-of) that the future is not hierarchical Bayes as it is currently done, but rather that we will be doing things that are much more ABC-like. That is, astrophysics theory is (generally) computational or simulation-based, and the data space is far too large for us to understand densities or probabilities in the data space. So we need ways to responsibly use simulations in inference. Right now the leading method is what is called (dumbly) ABC. I asked: So, are we going to do CMB component separation at the pixel level with ABC? This seems impossible at the present day, and DFM pointed out that ABC is best when precision requirements are low. When precision requirements are high, there aren't really options that have computer simulations inside the inference loop!
Many other things happened today. I spent time with Lauren Anderson (Flatiron), validating and inspecting the output of our parallax inferences. I spent a phone call with Fed Bianco (NYU) talking about how to adapt Gaussian Processes to make models of supernovae light curves. And Foreman-Mackey and I spent time talking about linear algebra, and also this blog post, with which we more-or-less agree (though perhaps it doesn't quite capture all the elements that contribute (positively and negatively) to the LTFDFCF of astronomers!).
I had a long conversation today with Justin Alsing (Flatiron) about hierarchical Bayesian inference, which he is thinking about (and doing) in various cosmological contexts. He is thinking about inferring a density field that simultaneously models the galaxy structures and the weak lensing, to do a next-generation (and statistically sound) lensing tomography. His projects are amazingly sophisticated, and he is not afraid of big models. We also talked about using machine learning to do emulation of expensive simulations, initial-conditions reconstruction in cosmology, and moving-object detection in imaging.
I also spent time playing with my linear algebra expressions for my document on finding identical stars. Some of the matrices in play are low-rank; so I ought to be able to either simplify my expressions or else simplify the number of computational steps. Learning about my limitations, mathematically! One thing I re-discovered today is how useful it is to use the Kusse & Westwig notation and conceptual framework for thinking about hermitian matrices and linear algebra.
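For the record, the kind of simplification low-rank structure buys is the matrix inversion lemma (Woodbury): inverting a diagonal-plus-rank-k matrix only ever requires solving a k-by-k system. Here is a numerical sanity check, with matrices made up for illustration (these are not the actual expressions in my document):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 500, 3  # big ambient dimension, small rank

# A covariance-like matrix: diagonal plus a rank-k piece, C = D + U U^T.
d = rng.uniform(1.0, 2.0, n)
U = rng.standard_normal((n, k))
C = np.diag(d) + U @ U.T

# Woodbury identity:
# (D + U U^T)^{-1} = D^{-1} - D^{-1} U (I_k + U^T D^{-1} U)^{-1} U^T D^{-1}
Dinv_U = U / d[:, None]
small = np.eye(k) + U.T @ Dinv_U          # the only matrix ever solved: k x k
Cinv = np.diag(1.0 / d) - Dinv_U @ np.linalg.solve(small, Dinv_U.T)

assert np.allclose(Cinv @ C, np.eye(n))
```

The direct inverse costs O(n³); this costs O(n k²) plus a k³ solve, which is the simplification of computational steps I was hoping for.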
In the Stars group meeting, Nathan Leigh (AMNH) and Nick Stone (Columbia) spoke about 4-body scattering or 2-on-2 binary-binary interactions. These can lead to 3-1, 2-2, and 2-1-1 outcomes, with the latter being most common. They are using a fascinating and beautiful ergodic-hypothesis-backed method (constrained by conservation laws) to solve for the statistical input–output relations quasi-analytically. This is a beautiful idea and makes predictions about star-system evolution in the Galaxy.
In the Cosmology group meeting, Alex Malz (NYU) led a long and wide-ranging discussion of blinding (making statistical results more reliable by pre-registering code or sequestering data). The range of views in the room was large, but all agreed that you need to be able to do exploratory data analysis and also protect against investigator bias. My position is we better be doing some form of blinding for our most important questions, but I also think that we need to construct these methods to permit people to play with the data and permit public data releases that are uncensored and unmodified. One theme which came up is that astronomy's great openness is a huge asset here. Fundamentally we are protected (in part) by the availability of the data to re-analysis.
Today, Kathryn Johnston (Columbia) organized a “Local Group on the Local Group” meeting at Columbia. Here are some highlights:
Lauren Anderson (Flatiron) gave an update on her data-driven model of the color–magnitude diagram of stars. This led to a conversation about which features in her deconvolved CMD are real? And are there too many red-clump stars given the total catalog size?
Steven Mohammed (Columbia) showed our GALEX Galactic-Plane survey data on the Gaia TGAS stars. The GALEX colors look very sensitive to metallicity and possibly other abundances. The audience suggested that we look at the full dependences on metallicity and temperature and surface gravity to see if we can break all degeneracies. This led to more discussion of the use of the Red Clump stars for Galactic science.
Adrian Price-Whelan (Columbia) presented a puzzle about the Galactic globular cluster system, which he has been thinking about. Are the distant clusters accreted? The in-situ formation hypothesis is unpalatable (there had to be many clusters at early times, which should have left many thin streams); the accreted hypothesis over-produces the smooth component of the stellar halo (unless dwarf galaxies had far more GCs per unit stellar mass in the past). These problems can be resolved, but only with strong predictions.
Yong Zheng (Columbia) spoke about the gaseous Magellanic stream and associated (or plausibly associated) high-velocity clouds. Many of the challenges in interpretation connect to the problem that we don't know where the gas is along the line of sight. She showed really nice data on something called Wright’s Cloud. For this huge structure—and for the stream as a whole—there is little to no associated stellar component.
Nicola Amorisco (Harvard) showed theoretical simulations of the accreted part of the MW (and MW-like-galaxy) halo, with the goal of finding stellar-halo observables that strongly co-vary with the assembly history of the dark-matter halo. Both theory and observations suggest large scatter in halo properties at Milky-Way-like masses, and much less scatter at higher masses (because of central-limit-like considerations). His results are promising for understanding the MW assembly history.
Glennys Farrar (NYU) spoke about the MW magnetic field, using rotation measures and CMB to constrain the model. She showed UHECR deflections in the inferred magnetic field, and also discussed implications of her results for electron and cosmic-ray diffusion. There are also tantalizing implications for the synchrotron spectrum and CMB component separation. One interesting comment: If her results are right for the scale and amplitude of the field, there are serious questions about origin and generation; is it primordial or generated on scales much larger than the galaxy?
My research activity today was to re-write, from scratch (well, I really started yesterday) my document on how you tell whether two noisily measured objects are identical. This is an old and solved problem! But I am writing the answer in astronomer-friendly form, with a few astronomy-related twists. I have no idea whether this is a paper, a section of a paper, or something else. My re-write was caused by the algebra I learned from Leistedt, and the customers (so to speak) of the document are Rix, Ness, and Hawkins, all of whom are thinking about finding identical pairs of stars.
I had a great visit to the University of Toronto Department of Astronomy and Astrophysics (and Dunlap Institute) today. I had great conversations about scintillometry (new word?) and the future of likelihood functions and component separation in the CMB. I also discussed pairwise velocity differences in cosmology, and probabilistic supernova classification. There is lots going on. I gave my talk on The Cannon, in which I was perhaps way too pessimistic about chemical tagging!
Early in the day, I ate Toronto-style (no, not Montreal-style) bagels with Dustin Lang (Toronto) and discussed many of the things we like to discuss, like finding very faint outer-Solar-System objects in all the data Lang wrangles, like the differences between accuracy and precision, and even how to define accuracy in astrophysics, and like April Fools' papers, which have to meet four criteria:
- conceptually interesting inference
- extremely challenging computation
- no long-term scientific value to the specific results found
- non-irrelevant mention of April 1 in abstract
My one piece of research news today was an email exchange with Boris Leistedt (NYU) in which he completely took me to school on math with Gaussians. My intuition (expressed this week) that there was an easier way to do all the operations I was doing was right! But everything else I was doing was not wrong but wrong-headed. Anyways, this should simplify some things right away. The key observation is that a product of Gaussians can be transformed into another product of Gaussians, in another basis, trivially. More soon!
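A one-dimensional numerical check of the key observation (toy numbers mine): the product of two Gaussians in x is, up to a constant that does not depend on x, another Gaussian, with the familiar inverse-variance combination of means and variances. The constant is itself a Gaussian in the two means.

```python
import numpy as np
from scipy.stats import norm

a, A = 1.0, 2.0   # mean, variance of the first Gaussian
b, B = 3.0, 0.5   # mean, variance of the second

C = 1.0 / (1.0 / A + 1.0 / B)   # combined (inverse-variance-sum) variance
c = C * (a / A + b / B)         # inverse-variance-weighted combined mean

x = np.linspace(-5, 8, 1001)
product = norm.pdf(x, a, np.sqrt(A)) * norm.pdf(x, b, np.sqrt(B))
gauss = norm.pdf(x, c, np.sqrt(C))

# The ratio is constant in x, and equals N(a; b, A + B).
ratio = product / gauss
assert np.allclose(ratio, ratio[0])
assert np.isclose(ratio[0], norm.pdf(a, b, np.sqrt(A + B)))
```

The multi-dimensional version is the same statement with matrices, which is (as I understand it) the basis-change trick Leistedt schooled me on.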
In Stars group meeting, Lauren Anderson (Flatiron) showed our toy example that demonstrates why our method for de-noising the Gaia TGAS data works. That led to some useful conversation that might help us explain our project better. I didn't take all the notes I should have! One idea that came up is that if there are two populations, one only seen at very low signal-to-noise, then that second population can easily get pulled in to the first. Another is the question of the circularity of the reasoning. Technically, our reasoning is circular, but it wouldn't be if we marginalized out the hyper-parameters (that is, the parameters of our color–magnitude diagram).

Also in the Stars meeting, Ruth Angus (Columbia) suggested how we might responsibly look for the differences in exoplanet populations with stellar age. And Semyeong Oh (Princeton) and Adrian Price-Whelan (Princeton) described their very successful observing run to follow up the comoving stellar pairs. Preliminary analyses suggest that many of the pairs (which we found only with transverse information) are truly comoving.
In Cosmology group meeting, Jeremy Tinker discussed the possibility of using halo-occupation-like approaches to determine how the globular cluster populations of galaxies form and evolve. This led to a complicated and long discussion, with many ideas and issues arising. I do think that various simple scenarios could be ruled out, making use of some kind of continuity argument (with sources and sinks, of course).
I spent some time hidden away working on multiplying and integrating Gaussians. I am doing lots of algebra, completing squares. I have the tiniest suspicion that there is an easier way, or that all of the math I am doing has a simple answer at the end, that I could have seen before starting?
First thing in the morning I met with Steven Mohammed (Columbia) and Dun Wang (NYU) to discuss GALEX calibration and imaging projects. Wang has a very clever astrometric calibration of the satellite, built by cross-correlating photons with the positions of known stars. This astrometric calibration depends on properties of the photons for complicated reasons that relate to the detector technology on board the spacecraft. Mohammed finds, in an end-to-end test of Wang's images, that there might be half-pixel issues in our calibration. We came up with methods for tracking that down.
Late in the day, I met with Ruth Angus (Columbia) to discuss the engineering in her project to combine all age information (and self-calibrate all methods). We discussed how to make a baby test where we can do the sampling with technology we are good at, before we write a brand-new Gibbs sampler from scratch. Why, you might ask, would any normal person write a Gibbs sampler from scratch when there are so many good packages out there? Because you always learn a lot by doing it! If our home-built Gibbs doesn't work well, we will adopt a package.
I spent time today writing in the method section of the Anderson et al paper. I realized in writing it that we have been thinking about our model of the color–magnitude diagram as being a prior on the distance or parallax. But it isn't really, it is a prior on the color and magnitude, which for a given noisy, observed star, becomes a prior on the parallax. We will compute these implicit priors explicitly (it is a different prior for every star) for our paper output. We have to describe this all patiently and well!
At some point during the day, Jo Bovy (Toronto) asked a very simple question about statistics: Why does re-sampling the data (given presumed-known Gaussian noise variances in the data space) and re-fitting deliver samples of the fit parameters that span the same uncertainty distribution as the likelihood function would imply? This is only true for linear fitting, of course, but why is it true (and no, I don't mean what is the mathematical formula!)? My view is that this is (sort-of) a coincidence rather than a result, especially since it (to my mind) confuses the likelihood and the posterior. But it is an oddly deep question.
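For what it's worth, here is a numerical demonstration of the coincidence, for a straight-line fit with known Gaussian noise (toy data of my own construction): the parametric-bootstrap covariance of the re-fit parameters matches the likelihood-implied covariance (AᵀC⁻¹A)⁻¹.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50
x = np.linspace(0, 1, n)
A = np.vander(x, 2)                  # design matrix for a straight-line fit
sigma = 0.1                          # known, Gaussian noise level
theta_true = np.array([2.0, -1.0])
y = A @ theta_true + sigma * rng.standard_normal(n)

# Likelihood-implied covariance of the MLE: (A^T C^-1 A)^-1 = sigma^2 (A^T A)^-1.
cov_like = sigma**2 * np.linalg.inv(A.T @ A)

# Parametric bootstrap: re-sample the data at the best fit, re-fit, repeat.
theta_hat = np.linalg.lstsq(A, y, rcond=None)[0]
samples = []
for _ in range(20000):
    y_rep = A @ theta_hat + sigma * rng.standard_normal(n)
    samples.append(np.linalg.lstsq(A, y_rep, rcond=None)[0])
cov_boot = np.cov(np.array(samples).T)

assert np.allclose(cov_boot, cov_like, rtol=0.05)
```

The agreement follows because, for a linear model, the fit is a fixed linear map of the data, so re-sampled data map to parameter samples with exactly the propagated covariance. For nonlinear models this breaks, which is part of why I call it a coincidence rather than a result.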
Today my research time was spent writing in the paper by Lauren Anderson (Flatiron) about the TGAS color–magnitude diagram. I think of it as being a probabilistic inference in which we put a prior on stellar distances and then infer the distance. But that isn't correct! It is an inference in which we put a prior on the color–magnitude diagram, and then, given noisy color and (apparent) magnitude information, this turns into an (effective, implicit) prior on distance. This Duh! moment led to some changes to the method section!
The stars group meeting today wandered into dangerous territory, because it got me on my soap box! The points of discussion were: Are there biases in the Gaia TGAS parallaxes? and How could we use proper motions responsibly to constrain stellar parallaxes? Keith Hawkins (Columbia) is working a bit on the former, and I am thinking of writing something short with Boris Leistedt (NYU) on the latter.
The reason it got me on my soap-box is a huge set of issues about whether catalogs should deliver likelihood or posterior information. My view—and (I think) the view of the Gaia DPAC—is that the TGAS measurements and uncertainties are parameters of a parameterized model of the likelihood function. They are not parameters of a posterior, nor the output of any Bayesian inference. If they were outputs of a Bayesian inference, they could not be used in hierarchical models or other kinds of subsequent inferences without a factoring out of the Gaia-team prior.
This view (and this issue) has implications for what we are doing with our (Leistedt, Hawkins, Anderson) models of the color–magnitude diagram. If we output posterior information, we have to also output prior information for our stuff to be used by normals, down-stream. Even with such output, the results are hard to use correctly. We have various papers, but they are hard to read!
One comment is that, if the Gaia TGAS contains likelihood information, then the right way to consider its possible biases or systematic errors is to build a better model of the likelihood function, given their outputs. That is, the systematics should be created to be adjustments to the likelihood function, not posterior outputs, if at all possible.
Another comment is that negative parallaxes make sense for a likelihood function, but not (really) for a posterior pdf. Usually a sensible prior will rule out negative parallaxes! But a sensible likelihood function will permit them. The fact that the Gaia catalogs will have negative parallaxes is related to the fact that it is better to give likelihood information. This all has huge implications for people (like me, like Portillo at Harvard, like Lang at Toronto) who are thinking about making probabilistic catalogs. It's a big, subtle, and complex deal.
[Today was a NYC snow day, with schools and NYU closed, and Flatiron on a short day.] I made use of my incarceration at home writing in the nascent paper about the TGAS color–magnitude diagram with Lauren Anderson (Flatiron). And doing lots of other non-research things.
Lauren Anderson (Flatiron) and I met early to discuss a toy model that would elucidate our color–magnitude diagram model project. Context is: We want to write a section called “Why the heck does this work?” in our paper. We came up with a model so simple, I was able to implement it during the drinking of one coffee. It is, of course, a straight-line fit (with intrinsic width, then used to de-noise the data we started with).
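Here is a sketch of that toy, under the extra simplifying assumption that the line parameters are already known (the real model fits them too): each noisy point gets shrunk toward the line, with weights set by the observational noise and the intrinsic width.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
m_true, b_true, s = 1.5, 0.3, 0.2   # line slope, intercept, intrinsic width
sigma = 0.5                         # per-point observational noise

x = rng.uniform(0, 2, n)
y_true = m_true * x + b_true + s * rng.standard_normal(n)
y_obs = y_true + sigma * rng.standard_normal(n)

# De-noising step: the posterior mean for each true y shrinks the noisy
# point toward the line, with inverse-variance weights.
w = (1 / sigma**2) / (1 / sigma**2 + 1 / s**2)
y_denoised = w * y_obs + (1 - w) * (m_true * x + b_true)

# The de-noised values are closer to the truth than the raw data.
assert np.mean((y_denoised - y_true)**2) < np.mean((y_obs - y_true)**2)
```

That, in miniature, is why the heck this works: the population (the line, or the CMD) acts as a prior that pulls each noisy measurement toward where objects like it actually live.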
Lauren Anderson (Flatiron) and I are going to sprint this week on her paper on the noise-deconvolved color–magnitude diagram from the overlap of Gaia TGAS, 2MASS, and the PanSTARRS 3-d dust map. We started the day by making a long to-do list for the week, that could end in submission of the paper. My first job is to write down the data model for the data release we will do with the paper.
At lunch time I got distracted by my project to find a better metric than chi-squared to determine whether two noisily-observed objects (think: stellar spectra or detailed stellar abundance vectors) are identical or indistinguishable, statistically. The math involved completing a huge square (in linear-algebra space) twice. Yes, twice. And then the result is—in a common limit—exactly chi-squared! So my intuition is justified, and I know where it will under-perform.
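Here is the limiting statistic in numerical form (fake diagonal covariances of my choosing): under the hypothesis that the two objects are identical, the difference of the two measurements has mean zero and covariance C1 + C2, so the quadratic form below is chi-squared with d degrees of freedom.

```python
import numpy as np

rng = np.random.default_rng(7)
d = 15  # dimension of the measured vector (e.g. detailed abundances)
truth = rng.standard_normal(d)
s1 = rng.uniform(0.2, 0.5, d)  # per-element noise sigmas, object 1
s2 = rng.uniform(0.2, 0.5, d)  # per-element noise sigmas, object 2

# Draw many pairs of noisy measurements of the SAME underlying object
# and compute chi2 = (x1 - x2)^T (C1 + C2)^{-1} (x1 - x2).
chi2s = []
for _ in range(5000):
    x1 = truth + s1 * rng.standard_normal(d)
    x2 = truth + s2 * rng.standard_normal(d)
    diff = x1 - x2
    chi2s.append(diff @ (diff / (s1**2 + s2**2)))
print(np.mean(chi2s))  # close to d = 15, the chi-squared mean
```

The full-linear-algebra version I derived reduces to this when the noise dominates the relevant prior widths, which is the common limit I was alluding to.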
At the NYU Astro Seminar, Ana Bonaca (Harvard) gave a great talk, about trying to understand the dynamics and origin of the Milky Way halo. She has a plausible argument that the higher-metallicity halo stars are the halo stars that formed in situ and migrated out, while the lower-metallicity stars were accreted. If this holds up, I think it will probably test a lot of things about the Galaxy's formation, history, and dark-matter distribution. She also talked about stream fitting to see the dark-matter component.
On that note, we started a repo for a paper on the information theory of cold stellar streams. We re-scoped the paper around information rather than the LMC and other peculiarities of the Local Group. Very late in the day I drafted a title and abstract. This is how I start most projects: I need to be able to write a title and abstract to know that we have sufficient scope for a paper.
I discussed some more the Cramér-Rao bound (or Fisher-matrix) computations on cold stellar streams being performed by Ana Bonaca (Harvard). We discussed how things change as we increase the numbers of parameters, and designed some possible figures for a possible paper.
I had a long phone call with Andy Casey (Monash) about The Cannon, which is being run inside APOGEE2 to deliver parameters in a supplemental table in data release 14. We discussed issues of flagging stars that are far from the training set. This might get strange in high dimensions.
In further APOGEE2 and The Cannon news, I dropped an email on the mailing lists about the radial-velocity measurements that Jason Cao (NYU) has been making for me and Adrian Price-Whelan (Princeton). His RV values look much better than the pipeline defaults, which is perhaps not surprising: The pipeline uses some cross-correlation templates, while Cao uses a very high-quality synthetic spectrum from The Cannon. This email led to some useful discussion about other work that has been done along these lines within the survey.
At stars group meeting, David Spergel (Flatiron) was tasked with convincing us (and Price-Whelan and I are skeptics!) that the Milky Way really does have spiral arms. His best evidence came from infrared emission in the Galactic disk plane, but he brought together a lot of relevant evidence, and I am closer to being convinced than ever before. As my loyal reader knows, I think we ought to be able to see the arms in any (good) 3-d dust map. So, what gives? That got Boris Leistedt (NYU), Keith Hawkins (Columbia), and me thinking about whether we can do this now, with things we have in-hand.
Also at group meeting, Semyeong Oh (Princeton) showed a large group-of-groups she has found by linking together co-moving pairs into connected components by friends-of-friends. It is rotating with the disk but at a strange angle. Is it an accreted satellite? That explanation is unlikely, but if it turns out to be true, OMG. She is off to get spectroscopy next week, though John Brewer (Yale) pointed out that he might have some of the stars already in his survey.
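For intuition, here is a minimal sketch of the friends-of-friends step (toy points, not Oh's actual pipeline or linking criteria): find all pairs closer than a linking length with a k-d tree, then take connected components of the resulting graph.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def fof_groups(positions, linking_length):
    """Link all pairs closer than linking_length, then return the
    connected components (friends of friends of friends...)."""
    tree = cKDTree(positions)
    pairs = tree.query_pairs(linking_length, output_type="ndarray")
    n = len(positions)
    adj = csr_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])),
                     shape=(n, n))
    n_groups, labels = connected_components(adj, directed=False)
    return n_groups, labels

# two tight pairs, far apart: linking at 0.5 finds two groups
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
n_groups, labels = fof_groups(pts, 0.5)  # → 2 groups
```

Linking co-moving pairs into a group-of-groups is exactly this operation, with the "distance" defined in position-velocity space rather than position alone.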
Today was a cold-stream science day. Ana Bonaca (Harvard) computed derivatives today of stream properties with respect to a few gravitational-potential parameters, holding the present-day position and orientation of the stream fixed. This permits computation of the Cramér-Rao bound on any inference or estimate of those parameters. We sketched out some ideas about what a paper along these lines would look like. We can identify the most valuable streams, the streams most sensitive to particular potential parameters, the best combinations of streams to fit simultaneously, and the best new measurements to make of existing streams.
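As a cartoon of the computation (toy derivatives and error bars, not Bonaca's actual stream model): the Fisher matrix is built from the derivatives of the observables with respect to the potential parameters and the inverse covariance of the observables, and its inverse is the Cramér-Rao bound.

```python
import numpy as np

def fisher_matrix(derivs, Cinv):
    """Fisher information from derivatives of the predicted stream
    observables with respect to the potential parameters.

    derivs : (P, N) array, d(observable n) / d(parameter p)
    Cinv   : (N, N) inverse covariance of the N observables
    """
    return derivs @ Cinv @ derivs.T

# toy numbers: 2 potential parameters, 5 stream data points
rng = np.random.default_rng(17)
derivs = rng.normal(size=(2, 5))
Cinv = np.eye(5) / 0.1**2           # independent 0.1-unit error bars

F = fisher_matrix(derivs, Cinv)
crlb = np.linalg.inv(F)             # Cramér-Rao bound on the parameter covariance
best_case_sigmas = np.sqrt(np.diag(crlb))
```

Everything in the paragraph above (most valuable streams, best combinations, best new measurements) reduces to comparing these bounds across choices of data.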
Separately from this, I had a phone conversation with Adrian Price-Whelan (Princeton) about the point of doing stream-fitting. It is clear (from Bonaca's work) that fitting streams in toy potentials is giving us way-under-estimated error bars. This means that we have to add a lot more potential flexibility to get more accurate results. We debated the value of things like basis-function expansions, given that these are still in the regime of toy (but highly parameterized toy) models. We are currently agnostic about whether stream fitting is really going to reveal the detailed properties of the Milky Way's dark-matter halo. That is, for example, the properties that might lead to changes in what we think is the dark-matter particle.
Ana Bonaca (Harvard) showed up for a week of (cold) stellar streams inference. Our job is either to resurrect her project to fit multiple streams simultaneously, or else choose a smaller project to hack on quickly. One thing we have been discussing by email is the influence of the LMC (and SMC and M31 and so on) on the streams. Will it be degenerate with halo quadrupole or other parameters? We discussed how we might answer this question without doing full probabilistic inferences: In principle we only need to take some derivatives. This is possible, because Bonaca's generative stream model is fast. We discussed the scope of a minimum-scope paper that looks at these things, and Bonaca started computing derivatives.
Lauren Anderson (Flatiron) and I looked at her dust estimates for the stars in Gaia DR1 TGAS. She is building a model of the color–magnitude diagram with an iterative dust optimization: At zeroth iteration, the distances are (generally) over-estimated; we dust-correct, fit the CMD, and re-estimate distances. Then we re-estimate dust corrections, and do it again. The dust corrections oscillate between over- and under-corrections as the distances oscillate between over- and under-estimates. But it does seem to converge!
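A toy fixed-point iteration shows why that oscillation pattern can still converge: if each update overshoots the truth in the opposite direction with a contraction factor below unity, the errors alternate in sign but shrink geometrically. The coupling value below is made up.

```python
# Toy stand-in for the distance/dust alternation: each update pushes the
# distance estimate past the truth by a fraction of the current error.
def update(distance, true_distance=1.0, coupling=0.5):
    return true_distance - coupling * (distance - true_distance)

d = 2.0  # zeroth iteration: an over-estimated distance
history = [d]
for _ in range(20):
    d = update(d)
    history.append(d)
# successive iterates alternate above and below 1.0, and converge
```

With a coupling above unity the same scheme would diverge, which is the thing to watch for in the real iteration.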
I met with Keith Hawkins (Columbia) in the morning, to discuss how to find stellar pairs in spectroscopy. I fundamentally advocated chi-squared difference, but with some modifications, like masking things we don't care about, removing trends on length-scales (think: continuum) that we don't care about, and so on. I noted that there are things to do that are somewhat better than chi-squared difference, that relate to either hypothesis testing or else parameter estimation. I promised him a note about this, and I also owe the same to Melissa Ness (MPIA), who has similar issues but in chemical-abundance (rather than purely spectral) space. Late in the day I worked on this problem over a beer. I think there is a very nice solution, but it involves (as so many things like this do) a non-trivial completion of a square.
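For the record, here is the chi-squared-difference baseline in sketch form (hypothetical function names; the masking and trend removal are the modifications mentioned above):

```python
import numpy as np

def chisq_difference(f1, ivar1, f2, ivar2, mask=None):
    """Chi-squared difference of two spectra on a common wavelength grid.

    The difference of two noisy spectra has the *sum* of the variances,
    hence the combined inverse variance below; a boolean mask zeroes out
    pixels we don't care about (bad pixels, tellurics, and so on).
    """
    ivar = 1.0 / (1.0 / ivar1 + 1.0 / ivar2)
    if mask is not None:
        ivar = ivar * mask
    return float(np.sum(ivar * (f1 - f2) ** 2))

def remove_continuum(wave, flux, order=3):
    """Divide out a low-order polynomial: a stand-in for removing
    long-wavelength (continuum-like) trends we don't care about."""
    coef = np.polynomial.polynomial.polyfit(wave, flux, order)
    return flux / np.polynomial.polynomial.polyval(wave, coef)
```

The "somewhat better" alternatives alluded to above replace the raw chi-squared with a likelihood-ratio or parameter-estimation statistic, which is where the completion of the square comes in.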
In the afternoon, I met with my undergrad-and-masters research group. Everyone is learning how to install software, and how to plot spectra, light curves, and rectangular data. We talked about projects with the Boyajian Star, and also with exoplanets in 1:1 resonances (!).
The research highlight of my day was a trip to D. E. Shaw, to give an academic seminar (of all things) on extra-solar planet research. I was told that the audience would be very mathematically able and familiar with physics and engineering, and it was! I talked about the stationary and non-stationary Gaussian Processes we use to model stellar (stationary) and spacecraft (non-stationary) variability, how we detect exoplanet signals by brute-force search, and how we build and evaluate hierarchical models to learn the full population of extra-solar planets, given noisy observations. The audience was interactive and the questions were on-point. Of course many of the things we do in astrophysics are not that different—from a data-analysis perspective—from things the hedge funds do in finance. I spent my time at D. E. Shaw trying to understand the atmosphere of the firm. It seems very academic and research-based, and (unlike at many banks), the quantitative researchers run the show.
Today was group meetings day. In the Stars meeting, John Brewer (Yale) told us about fitting stellar spectra with temperature, gravity, and composition, epoch-by-epoch for a multi-epoch radial-velocity survey. He is trying to understand how consistent his fitting is, what degeneracies there are, and whether there are any changes in temperature or gravity that co-vary with radial-velocity jitter. No results yet, but we had suggestions for tests to do. His presentation reinforced my idea (with Megan Bedell) to beat spectral variations against asteroseismological oscillation phase.
In the Cosmology meeting, Peter Melchior (Princeton) told us about attempts to turn de-blending into a faster and better method that is appropriate for HSC and LSST-generation surveys. He blew us away with a tiny piece of deep HSC imaging, and then described a method for deblending that looks like non-negative matrix factorization, plus convex regularizations. He has done his research on the mathematics around convex regularizations, reminding me that we should do a more general workshop on these techniques. We discussed many things in the context of Melchior's project; one interesting point is that the deblending problem doesn't necessarily require good models of galaxies (Dustin Lang and I always think of it as a modeling problem); it just needs to deliver a good set of weights for dividing up photons.
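The method itself is Melchior's; as a stand-in for the non-negative-factorization core (no convex regularizations, and made-up toy data), here is the textbook Lee–Seung multiplicative update for plain NMF:

```python
import numpy as np

def nmf(V, k, n_iter=2000, eps=1e-9, seed=3):
    """Lee–Seung multiplicative updates for V ≈ W @ H with W, H ≥ 0."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + eps
    H = rng.random((k, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# a rank-2 nonnegative "image" factors back to itself
rng = np.random.default_rng(4)
V = rng.random((5, 2)) @ rng.random((2, 6))
W, H = nmf(V, 2)
```

In the deblending picture, the columns of W are per-source spectral weights and the rows of H are per-source spatial weights; normalizing their outer products gives the photon-division weights mentioned above.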
Today I dropped in on Detecting the Unexpected in Baltimore, to provide a last-minute talk replacement. In the question period of my talk, Tom Loredo (Cornell) got us talking about precision vs accuracy. My position is a hard one: We never have ground truth about things like chemical abundances of stars; every chemical abundance is a latent variable; there is no external information we can use to determine whether our abundance measurements are really accurate. My view is that a model is accurate only inasmuch as it makes correct predictions about qualitatively different data. So we are left with only precision for many of our questions of greatest interest. More on this in some longer form, later.
Highlights (for me; very subjective) of the day's talks were stories about citizen science. Chris Lintott (Oxford) told us about tremendous lessons learned from years of Zooniverse, and the non-trivial connections between how you structure a project and how engaged users will become. He also talked about a long-term vision for partnering machine learning and human actors. He answered very thoughtfully a question about the ethical aspects of crowd-sourcing. Brooke Simmons (UCSD) showed us how easy it is to set up a crowd-sourcing project on Zooniverse; they have built an amazingly simple interface and toolkit. Steven Silverberg (Oklahoma) told us about Disk Detective and Julie Banfield (ANU) told us about Radio Galaxy Zoo. They both have amazing super-users, who have contributed to published papers. In the latter project, they have (somewhat serendipitously) found the largest radio galaxy yet known! One take-away from my perspective is that essentially all of the discoveries of the Unexpected have happened in the forums—in the deep social interaction parts of the citizen-science sites.
After a morning working on terminology and notation for the color–magnitude diagram model paper with Lauren Anderson (Flatiron), I went to two seminars. The first was Jeremy Tinker (NYU) talking about the relationship between galaxy stellar mass and dark-matter halo mass as revealed by fitting of number-count and clustering data in large-scale structure simulations. He finds that only models with extremely small scatter (less—maybe far less—than 0.18 dex) are consistent with the data, and that the result is borne out by follow-ups with galaxy–galaxy lensing and other tests. This is very hard to understand within any realistic model for how galaxies form, and constitutes a new puzzle for standard cosmology plus gastrophysics.
In the afternoon there was a very wide-ranging talk by Mark Dredze (JHU) on data-science methods for social science, intervention in health issues, and language encoding. He is interested in taking topic models and either deepening them (to make better features) or else enriching their probabilistic structure. It is all very promising, though these subjects are—despite their extreme mathematical sophistication—in their infancy.
[I have been on vacation for a week.]
All I have done in the last week is (fail to) keep up with email (apologies y'all) and write one paragraph per day in the nascent paper with Lauren Anderson (Flatiron) about our data-driven model of the color–magnitude diagram. The challenge is to figure out what to emphasize: the fact that we de-noise the parallaxes, or the fact that we can extend geometric parallaxes to more distant stars, or the fact that we don't need stellar models?
Today the astro seminar was given by Or Graur (CfA). He spoke about various discoveries he and collaborators have made in type Ia supernovae. For me, the most exciting was the discovery of atomic-mass-57 elements, which he can find by looking at the late-time decay: The same way we identify the mass-56 elements by timing supernova decays at intermediate times, he finds the mass-57 elements. The difference is that they appear at much later times (decay times of years). He pointed out a caveat, which is that the late-time light curve can also be affected by unresolved light echoes. That's interesting and got me thinking (once again) about all the science related to light echoes that might be under the radar right now.
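The timing argument can be sketched with two exponential decay chains; only the half-lives below are physical, and the relative abundances are made up for illustration.

```python
import numpy as np

LN2 = np.log(2.0)

def decay_power(t_days, n56=1.0, n57=0.03):
    """Toy late-time energy deposition from two decay chains.

    Power is proportional to N * lambda * exp(-lambda * t); the
    normalizations are illustrative, not a fit to any supernova.
    """
    lam56 = LN2 / 77.2    # Co-56 half-life, ~77 days
    lam57 = LN2 / 271.8   # Co-57 half-life, ~272 days
    p56 = n56 * lam56 * np.exp(-lam56 * t_days)
    p57 = n57 * lam57 * np.exp(-lam57 * t_days)
    return p56, p57

# early on the mass-56 chain dominates; by ~1000 days the slower
# mass-57 chain takes over, which is what makes it detectable
p56_early, p57_early = decay_power(100.0)
p56_late, p57_late = decay_power(1000.0)
```

The crossover time shifts with the assumed 57/56 ratio, which is exactly why the very-late-time light curve constrains the mass-57 yield (modulo the light-echo caveat).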
I hosted today my first-ever undergraduate research meeting. I got together undergraduates and pre-PhD students who are interested in doing research, and we discussed the Kepler and APOGEE data. My plan (and remember, I like to fail fast) is to have them work together on overlapping projects, so they all have coding partners but also their own projects. With regular meetings, it can fit into schedules and become something like a class!
In the stars group meeting at CCA, Keith Hawkins (Columbia) blew us away with examples of stellar twins, identified with HARPS spectra. They were chosen to have identical derived spectroscopic parameters in three or four labels, but were amazingly identical at signal-to-noise of hundreds. He then showed us some he found in the APOGEE data, using very blunt tools to identify twins. This led to a long discussion of what we could do with twins, and things we expect to find in the data, especially regarding failures of spectroscopic twins to be identical in other respects, and failures of twins identified through means other than spectroscopic to be identical spectroscopically. Lots to do!
This was followed by Ruth Angus (Columbia) walking us through all the age-dating methods we have found for stars. The crowd was pretty unimpressed with many of our age indicators! But they agreed that we should take a self-calibration approach to assemble them and cross-calibrate them. It also connects interestingly to the twins discussion that preceded it. Angus and I followed the meeting with a more detailed discussion about our plans, in part so that she can present them in a talk in the near future.
Kathryn Johnston (Columbia) organized a Local-Group meeting of locals, or a local group of Local Group researchers. There were various discussions of things going on in the neighborhood. Natalie Price-Jones (Toronto) started up a lot of discussion with her work on the dimensionality of chemical-abundance space, working purely with the APOGEE spectral data. That is, they are inferring the dimensionality without explicitly measuring chemical abundances or interpreting the spectra at all. Much of the questioning centered on how they know that the diversity they see is purely or primarily chemical rather than, say, instrumental or stellar nuisances.
At lunch time there were amusing things said at the Columbia Astro Dept Pizza Lunch. One was a very nice presentation by Benjamin Pope (Oxford) about how to do precise photometry of saturated stars in the Kepler data. He has developed a method that fully scoops me in one of my unfinished projects: The OWL, in which the pixel weights used in his soft-aperture photometry are found through the optimization of a (very clever, in Pope's case) convex objective function. After the Lunch, we discussed a huge space of generalizations, some in the direction of more complex (but still convex) objectives, and others in the direction of train-and-test to ameliorate over-fitting.
Benjamin Pope (Oxford) arrived in New York today for a few days of visit, to discuss projects of mutual interest, with the hope of starting collaborations that will continue in his (upcoming) postdoc years. One thing we discussed was the JWST Early Release Science proposal call. The idea is to ask for observations that would be immediately scientifically valuable, but also create good archival opportunities for other researchers, and also help the JWST community figure out what are the best ways to make best use of the spacecraft in its (necessarily) limited lifetime. I am kicking around four ideas, one of which is about photometric redshifts, one of which is about precise time-domain photometry, one of which is about exoplanet transit spectroscopy, and one of which is about crowded-field photometry. The challenge we face is: Although there is tons of time to write a proposal, letters of intent are required in just a few weeks!
Benoit Côté (Victoria & MSU) came to NYU for the day. He gave a great talk about nucleosynthetic models for the origin of the elements. He is building a full pipeline from raw nuclear physics through to cosmological simulations of structure formation, to get it all right. There were many interesting aspects to his talk and our discussions afterwards. One was about the i-process, intermediate between r and s. Another was about how r-process elements (like Eu) put very strong constraints on the rate at which stars form within their gas. Another was about how we have to combine nucleosynthetic chemistry observations with other kinds of observations (of, say, the PDMF, and neutron-star binaries, and so on) to really get a reliable and true picture of the nucleosynthetic story.
Late in the afternoon, I met with Ruth Angus (Columbia) to further discuss our project on cross-calibrating (or really, self-calibrating) all stellar age indicators. We wrote down some probability expressions, sketched a rough design for the code, and discussed how we might structure a Gibbs sampler for this model, which is inherently hierarchical. We also drew a cool chalk-board graphical model (in this tweet), which has overlapping plates, which I am not sure is permitted in PGMs?
My writing today was in the introduction to the paper Lauren Anderson (Flatiron) and I are writing about the color-magnitude diagram and statistical shrinkage in the Gaia TGAS—2MASS overlap. My view is that the idea behind the project is the same as the fundamental idea behind the Gaia Mission: The astrometry data (the parallaxes) give distances to the nearby stars; these are used to calibrate spectrophotometric models, which deliver distances for the (far more numerous) distant stars. Our goal is to show that this can be done without any involvement of stellar physics or physical models of stellar structure, evolution, or photospheres.
At stars group meeting, run by Lauren Anderson (Flatiron), new graduate student Jason Cao (NYU) showed us his work on measuring radial velocities for individual-visit APOGEE spectra. He has a method for determining the radial velocity that does not involve interpolating either the data or the model. During his presentation, Jo Bovy (Toronto) pointed out that, actually, the APOGEE team appears to do an interpolation of the data after the one-d spectral extraction. That's unfortunate! But anyway, we have a method that doesn't involve any interpolation which could be used on a survey that doesn't ever do interpolation before or after extraction. And yes, you can extract a spectrum on any wavelength grid you like, from any two-d data you like, without doing interpolation! The group-meeting attendees had many constructive comments for Cao.
I spent the day at Princeton, hosted by Scott Tremaine (IAS). Tuesday lunch is still alive and well in Princeton, though I was shocked to find it happening in the Princeton Physics Department's Jadwin Hall. One beautiful result shown at the lunch was presented by Kento Masuda (Princeton), looking at hot exoplanets with eccentric outer companions. He has two examples that show dramatic transit timing and duration change events, presumably caused by a conjunction near the outer planet's periastron. The data are incredible and he generates a very informative (think: narrow) posterior on the outer planet's properties, despite the fact that the outer planet is not directly observed at all (and has a many-year period).
I spent most of my research time with Price-Whelan (Princeton) and Tremaine, discussing projects on the go. We spent a lot of time talking about whether it will be possible to learn fundamental things about the dark matter by building dynamical models of the stellar motions in the Milky Way. Tremaine came up with lots of reasons to be skeptical! However, if the dark matter doesn't annihilate, then (whether or not it is ever found in an underground lab) dynamics will be our only real tool. So I am confused. To me, it is much more interesting to model the dynamics of the Milky Way if it will tell us what the dark matter is than if it will tell us nothing more than some details about our contingent collapse and assembly history within a generic dark-matter scenario.
Getting even more philosophical, Tremaine and I discussed the question: What astronomy projects are purely descriptive of the "weather" of the Universe, and what projects get at fundamental physical processes? Even stronger: What astronomy projects might lead to changes to our beliefs about the fundamental physics itself? And how important is that, anyway? Revealing our prejudices, we both wanted to say that the most important areas of astronomy are those that might lead to changes in our beliefs about fundamental physics. But then we both wanted to say that exoplanet science is super-interesting! How to resolve this? Or is there a conflict?
The only research today was discussion of projects with Daniela Huppenkothen (NYU), Lauren Anderson (Flatiron), and Jo Bovy (Toronto). One subject of conversation was the need for selection functions in analyzing Gaia data, both now and in the future. Bovy is working on a selection function for Gaia DR1 TGAS and we discussed how we might generate a selection function for the final Gaia data release. I have a plan, but it involves making a simulated Gaia mission to get it started.
Today was the third and final day of The Galactic Renaissance. Rosie Wyse (JHU) and Branimir Sesar (MPIA) both showed evidence for vertical ripples going outwards in the Milky Way disk. These could plausibly be raised by an encounter with Sagittarius or something similar. However, Sesar argued that the amplitude is too large to be anything reasonable in the Local Group. That suggests that maybe the evidence isn't secure?
Raja GuhaThakurta (UCSC) mentioned the argument that the halo is worth observing because you can see the accretion history, at least in principle. There were talks after his by Sales (UCR), Lee (Vanderbilt), and Bonaca (CfA) on the observed and simulated properties of our halo.
Phil Hopkins (Caltech) and Yves Revaz (EPFL) gave impressive galaxy simulation results. Hopkins's renderings are just the bomb, and we discussed them in some detail afterwards. Hopkins claimed that low-mass galaxies (at least star-forming ones) are always so far out of steady-state, you can never measure their masses using virial or other steady-state indicators. He also brought up the point that the dust in the ISM has different dynamics than the molecular gas, and therefore there might be insane separation of material as stars form. I also discussed that with him afterwards.
Today was the second day of The Galactic Renaissance. Two scientific themes of the day were globular-cluster star abundance patterns, and stellar models that account for 3-d and non-local-thermodynamic-equilibrium (NLTE) effects. On the former, it was even suggested by one speaker that the existence of chemical-abundance variations of certain kinds might be part of the definition of a globular cluster! There are some extreme cases, and various claims that the most extreme examples might be the stripped centers of ancient accreted galaxies!
On the stellar modeling front, there were impressive demonstrations from Frebel (MIT), Bergemann (MPIA), and Thygesen (Caltech) that improving the realism of the physical inputs to stellar models improves their precision and their accuracy. Thygesen did a very nice thing of using (relatively cheap) 1-D models to inform functional forms for interpolation across grid points of a (relatively expensive) 3-D model grid. That got me interested in thinking about physics-motivated or physics-constrained interpolation methods, which could have value in lots of domains.
In a session about Judy's scientific and intellectual life, Steve Shectman (OCIW) described what the world was like in 1967, when Judy Cohen (Caltech) started graduate school. It was a time of optimism, disruption, and violence. This resonated with things I know about Cohen, because she and I used to discuss the historical context of her origins as an astronomer back when I was a graduate student.
Another highlight of the day was a discussion with Kim Venn (Victoria) and Matt Shetrone (Texas) about persistence effects that damage a significant fraction of spectra in a significant fraction of APOGEE exposures. We discussed the trade-offs between correction and avoidance, and what it might take to fix the problem.
Over dinner, I and others delivered tributes to Judy Cohen. She really has had an amazing scientific impact, and also been a wonderful person, and had a big influence on me. She also said nice things about me in her own speech!
Today was the first day of the meeting The Galactic Renaissance, a meeting in honor of Judy Cohen (Caltech), who was one of my (three) PhD advisors (with Blandford and Neugebauer). On the plane to the meeting I built a brand-new talk about data-driven models of stars, bringing in stuff we are doing in HARPS and Gaia and connecting it to what we are doing with The Cannon.
One highlight of the meeting was Steve Shectman (OCIW) talking about a new infrared multi-object spectrograph he is designing for Magellan. He talked about some interesting instrument design considerations, which was fitting, because Judy Cohen built (with Bev Oke and a great team) the most highly used instrument on the Keck Telescopes (the LRIS spectrograph, which I used in my PhD work). One point is that all spectrographs are fundamentally trade-offs between spatial and spectral extent, because the total number of pixels is limited. He noted that the spectrograph cost and weight is a strong function of the diameter of the collimated beam, which is simultaneously obvious and non-trivial. Finally, he noted that putting an imaging mode into a multi-object spectrograph substantially increases the cost and complexity: It requires that there not be chromatic optics, which imagers hate but spectrographs don't mind at all!
Another highlight was a talk about Solar twins by Jorge Melendez (São Paulo). By using carefully chosen twins, he can measure abundances better than 0.01 dex. He showed some great data. But even more absurdly, he is looking at binary stars, both members of which are themselves solar twins! And if that isn't absurd enough, he also has binary stars, both members of which are themselves solar twins, and one of which has an exoplanet! Awesome. He mentioned that [Y/Mg] is a (possibly complicated) age indicator, which is relevant to things Ruth Angus (Columbia) and I have been thinking about.
It was a low-research day! The most productive moment came early in the morning, when I had a great discussion with Boris Leistedt (NYU) and Adrian Price-Whelan (Princeton) about the structure of my group meetings at CCA. We need to change them; we aren't hearing enough from young people, and we aren't checking in enough on projects in progress. They agreed to lead a discussion of this in both group meetings tomorrow, and to make format changes, implemented immediately. I have to learn that failing to do things right is only bad if we don't learn from it and try to do better.
Right after this conversation, Price-Whelan and I got in a short discussion about making kinematic age estimates for stars, using widely-separated, nearly co-moving pairs. I hypothesized that for any real co-moving pair, the separation event (the spatial position and time at which they were last co-spatial) will be better estimated than either the velocity or the separation, given (say) Gaia data.
In the morning, I met with Ruth Angus (Columbia) to discuss the ages of stars. We brainstormed all possible age estimates for stars, and listed some limitations and epistemological properties. In addition to the usuals (rotation, activity, isochrone fitting, asteroseismology, and so on), we came up with some amusing options.
For example, the age of the Universe is a (crude, simple, very stringent) age estimate for every single star, no matter what. It is a very low-precision estimate, but it is unassailable (at the present day). Another odd one is the separation of comoving pairs. In principle every co-moving pair provides an age estimate given the relative velocity and relative position, with the proviso that the stars might not be co-eval. This is a good age estimate except when it isn't, and we only have probabilistic information about when it isn't.
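The arithmetic behind the comoving-pair indicator is one line, assuming the stars were co-spatial at birth and have drifted apart at a roughly constant relative speed:

```python
# Crudest version of the co-moving-pair clock: separation over relative
# speed. Constants are parsec-in-km and seconds-in-a-year.
PC_IN_KM = 3.0857e13
SEC_IN_YR = 3.156e7

def kinematic_age_gyr(separation_pc, delta_v_kms):
    age_s = separation_pc * PC_IN_KM / delta_v_kms
    return age_s / SEC_IN_YR / 1e9

# 1 pc opening at 1 km/s corresponds to roughly a Myr, so Gyr-scale ages
# require wide pairs with tiny relative velocities
age = kinematic_age_gyr(10.0, 0.01)  # ~1 Gyr
```

The tiny relative velocities required are exactly why Gaia-quality astrometry matters for this indicator.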
We then wrote down the basic idea for a project to build up a hierarchical model of all stellar ages, where each star gets a latent true age, and every age indicator gets latent free parameters (if there are any). Then we use stars that overlap multiple age indicators to simultaneously infer all free parameters and all ages. The hope—and this is a theme I would like to thread throughout all my research—is that many bad age indicators (and they are all bad for different reasons) will, when combined, produce precise age estimates nonetheless for many stars.
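The payoff can be illustrated with the simplest possible combination (inverse-variance weighting of made-up indicators with made-up uncertainties; the real model would additionally infer the per-indicator calibration parameters, hierarchically):

```python
import numpy as np

# Several independent, individually bad age indicators for one star,
# combined with inverse-variance weights, beat any single one of them.
rng = np.random.default_rng(8)
true_age = 4.5                                # Gyr, a made-up star
sigmas = np.array([2.0, 1.5, 3.0, 2.5])       # four bad indicators
measured = true_age + sigmas * rng.normal(size=sigmas.size)

w = 1.0 / sigmas**2
combined = np.sum(w * measured) / np.sum(w)
combined_sigma = 1.0 / np.sqrt(np.sum(w))     # better than the best single indicator
```

This is the best case (independent, unbiased, known variances); the whole point of the hierarchical model is to handle the realistic case where none of those assumptions holds.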
At lunch-time, Glennys Farrar (NYU) gave an energizing black-board talk about a dark-matter candidate that exists within QCD, made up of a highly symmetric state of six quarks. QCD is a brutal theory, so it is hard to compute the properties of this state, or its stability, but Farrar laid out some of the conditions under which it is a viable dark-matter candidate. It is very interesting phenomenologically if it exists, because it has a non-trivial cross-section for scattering off of atomic nuclei, and it could be involved in baryogenesis or the matter–anti-matter asymmetry.
[This is the 12th birthday of this blog, and something like the 2814th post. I remain astonished that anyone reads this blog; surely it qualifies as one of the least interesting sites on the internet.]
I spent today again inside NSF headquarters. It was a good day, because most of our session was pure unstructured discussion of the issues—not presentations from anyone—in open session. All of the AAAC sessions are completely open, with an agenda and a call-in number open to literally anyone on the planet. This openness was also part of our discussion, because we got in some discussion of the opaque process by which the Decadal Survey (which is so damned important) is executed and also staffed. As part of this I published the non-disclosure agreement that the National Academy of Sciences asks people to sign if they are going to participate. It is way too strong, I think.
We also talked about many other interesting priorities and issues for our report. One is that the America Competes Act explicitly refers to the astrophysics decadal process as an exemplar for research funding priority-setting in the US government. Another is that the freedom of scientists in government agencies to clearly and openly communicate without executive-branch interference is absolutely essential to everything we do. Another is that the current (formalized, open) discussion about CMB Stage-4 experiments is an absolutely great example of inter-agency and inter-institutional and cross-rivalry cooperation that will lead to a very strong proposal for the agencies, the community, and the Decadal Survey.
One very important point, which also came up at #AAS229, is that if we are going to make good, specific, actionable recommendations to the Decadal Survey about the state of the profession, about software, or about funding structures, we need to gather data now. These data are hard to gather; there are design and execution issues all over; let's have that conversation right now.
Today was the first day of an Astronomy and Astrophysics Advisory Committee meeting at NSF headquarters in Washington, DC. We had presentations from the agencies for most of the day. Random things that I learned that interest me follow in this blog post. Our meetings are open, by the way.
NSF is trying to divest from facilities in ways that keep them running by other partners, so even though they may go public, they will at least stay part of the community. In particular, they are working to offload Arecibo to a combination of NASA and private partners.
NASA has taken its ATP theory call down to once every two years, but not reduced funding. They hope this will increase the amount of funding per submitted proposal, and the early data seems like it might. NASA and NSF have started a joint funding program called TCAN for computational methods in astrophysics. That might affect me! NASA re-balanced its fellowship postdocs, in response to concerns about pipeline, long-term trends in their own funding portfolio, and the rise in private fellowships. This is debatable and controversial, though they did not enter into this decision-making lightly. What is not controversial is that they have combined all the fellowships into a common application process, substantially reducing the workload on applicants and referees.
There is an extremely big and serious CMB S-4 process going on, in which many traditionally rivaling scientific groups are cooperating to find consensus around what to build or do next. That's very healthy for the field, I think, and will create a very strong set of ideas for the next Decadal Survey to discuss. Decadal is on the agenda for tomorrow!
Towards the end of the day, Paul Hertz (NASA) and I got into a fight about Deep Space Network. I fear that I might be wrong here; I can't really claim to understand that stuff better than Hertz!