2017-04-29

after-Sloan-4 proposal writing, day 2

I violated house rules today and spent a Saturday continuing work from yesterday on the planning and organization of the AS-4 proposal. We slowly walked through the whole proposal outline, assigning responsibilities for each section. We then walked through it again, designing the figures that need to be made, and assigning responsibilities for those too. It took all day! But we have a great plan for a great proposal. I'm very lucky to have this impressive set of colleagues.

2017-04-28

after-Sloan-4 proposal writing, day 1

Today was the first day of the AS-4 (After-Sloan-4) proposal-writing workshop, in which we started a sprint towards a large proposal for the Sloan Foundation. Very intelligently, Juna Kollmeier (OCIW) and Hans-Walter Rix (MPIA) started the meeting by having every participant give a long introduction, in which they not only said who they are and what they are interested in, but they also said what they thought the biggest challenges are in making this project happen. This took several hours, and got a lot of the big issues onto the table.

For me, the highlights of the day were presentations by Rick Pogge (OSU) and Niv Drory (Texas) about the hardware work that needs to happen. Pogge talked about the fiber positioning system, which will include robots, and a corrector, and a [censored] of a lot of sophisticated software (yes, I love this). It will reconfigure fast, to permit millions of exposures (something like 25 million over five years) with short exposure times. Pogge really convinced me of the feasibility of what we are planning to do, and delivered a realistic (but aggressive) timeline and budget.

Drory talked about the Local Volume Mapper, which mates a fiber-based IFU to a range of telescopes with different focal lengths (but same f-ratio) to make 3-d data cubes at different scales for different objects and different scientific objectives. It is truly a genius idea (in part because it is so simple). He showed us that they are really, really good at making close-packed fiber bundles, something they learned how to do with MaNGA.

It was a great day of serious argument, brutally honest discussion of trade-offs, and task lists for a hard proposal-writing job ahead.

2017-04-26

void–galaxy cross-correlations, stellar system encounters

Both Flatiron group meetings were great today. In the first, Nathan Leigh (AMNH) spoke about collisions of star systems (meaning 2+1 interactions, 2+2, 2+3, and 3+3), using collisionless dynamics and the sticky-star approximation (to assess collisions). He finds a simple scaling of collision probabilities in terms of combinatorics; that is, the randomness or chaos is efficient, more efficient than you might think. The crowd had many questions about scattering in stellar systems and equipartition.

This led to a wider discussion of dynamical scattering. We asked the question: Can we learn about dynamical heating in stellar systems by looking at residual exoplanet populations (for example, if the heating is by close encounters with stars, planetary systems should be truncated)? We concluded that wide-separation binaries are probably better tracers, from the perspective that they are easier to see. Then we asked: Can the Sun's own Oort cloud be used to measure star–star interactions? And: Are there interstellar comets? David Spergel (Flatiron) pointed out the (surprising, to me) fact that there are no comets on obviously hyperbolic orbits.

Raja GuhaThakurta (UCSC) is in town; he showed an amazing video zooming in to a tiny patch of Andromeda’s disk. He discussed Julianne Dalcanton’s dust results in M31 (on which I am a co-author). He then showed us detailed velocity measurements he has made for 13,000 (!) stars in the M31 disk. He finds that the velocity dispersion of the disk grows with stellar age, and grows faster and to larger values than in the Milky Way disk. That led to more lunch-time speculation.

In the cosmology meeting, Shirley Ho (CMU) spoke about large-scale structure and machine learning. She asked the question: Can we use machine learning to compare simulations to data? In order to address this, she is doing a toy project: Compare simulations to simulations. She finds that a good conv-net does as well as the traditional power-spectrum analysis. This led to some productive discussion of where machine learning is most valuable in cosmology. Ben Wandelt (Paris) hypothesized that a machine-learning emulator can’t beat an n-body simulation. I disagreed (though on weak grounds)! We proposed that we set up a challenge of some kind, very well specified.
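
As a concrete (and entirely hypothetical) version of the simulations-to-simulations toy project: train a small conv-net to classify density fields drawn from two different simulation classes, and compare its held-out accuracy to a classifier built on the measured power spectrum. This is my sketch of the generic technique, not Ho's actual pipeline; the architecture, field shapes, and binning are all assumptions.

    # Sketch only (not Ho's pipeline): can a conv-net distinguish two
    # simulation classes better than the power spectrum can?
    import numpy as np
    import torch
    import torch.nn as nn

    def radial_power_spectrum(field, nbins=16):
        """Azimuthally averaged power spectrum of a 2-d field."""
        power = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
        ny, nx = field.shape
        y, x = np.indices((ny, nx))
        r = np.hypot(y - ny // 2, x - nx // 2)
        bins = np.linspace(0., r.max(), nbins + 1)
        which = np.clip(np.digitize(r.ravel(), bins) - 1, 0, nbins - 1)
        return np.array([power.ravel()[which == i].mean() for i in range(nbins)])

    class SmallConvNet(nn.Module):
        """Tiny two-class classifier for (1, ny, nx) density fields."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1))
            self.head = nn.Linear(16, 2)

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    # Train with cross-entropy on (field, class-label) pairs; the
    # "traditional" baseline is, say, logistic regression on the
    # radial_power_spectrum() features. Comparing held-out accuracies
    # would be one version of the well-specified challenge.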

Ben Wandelt then spoke about linear inverse problems, on which he is doing very creative and promising work. He classified foreground approaches (for LSS and CMB) into Avoid, Adapt, or Attack. On Avoid: He is using a low-rank covariance constraint to find foregrounds; this capitalizes on their smooth wavelength (frequency) dependences while reducing detailed assumptions (a toy version is sketched below). He showed that this separates signal from foreground: the signal is high-rank and CDM-like (isotropic, homogeneous, and so on), while the foreground is low-rank (smooth in wavelength space). He then switched gears and showed us an amazingly high signal-to-noise void–galaxy cross-correlation function. We discussed how the selection affects the result. The cross-correlation is strongly negative at small separations and shows an obvious Alcock–Paczynski effect. David Spergel asked: Since this is an observation of “empty space”, does it somehow falsify modified GR or radical particle things?
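
Here is the promised toy version of the Avoid idea (my own illustration, not Wandelt's actual method; the map shapes and the number of modes kept are assumptions): stack the maps into a frequency-by-pixel matrix, take an SVD, and call the few largest modes, which are smooth in frequency, the foreground.

    # Toy low-rank foreground removal (not Wandelt's actual method).
    import numpy as np

    def remove_low_rank_foreground(maps, rank=3):
        """maps: (n_channels, n_pixels) array of sky maps at different
        frequencies. Subtract the best rank-`rank` approximation; what
        survives is the high-rank (signal-like) part."""
        U, s, Vt = np.linalg.svd(maps, full_matrices=False)
        foreground = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        return maps - foreground, foreground

    # The foreground is smooth in frequency, so a few singular vectors
    # capture it; the cosmological signal is close to independent from
    # channel to channel, so it is nearly orthogonal to those modes.
    # The price: any signal component that happens to be smooth in
    # frequency gets removed along with the foreground.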

2017-04-25

Dr Geoff Ryan

Today Geoff Ryan (NYU) defended his PhD. I wrote a few things about his work here last week and he did not disappoint in the defense. The key idea I take from his work is: In an axisymmetric system (axisymmetric matter distribution and axisymmetric force law), material will not accrete without viscosity; it will settle into an incredibly long-lived disk (like Saturn's rings!). This problem has been solved by adding viscosity (artificially, but we do expect effective sub-grid viscosity from turbulence and magnetic fields), but less has been done about non-axisymmetry. Ryan shows that in the case of a binary system (this generates the non-axisymmetry), accretion can be driven without any viscosity. That's important and deep. He also talked about numerics, and also about GRB afterglows. It was a great event and we will be sad to see him go.

2017-04-24

hypothesis testing and marginalization

I had a valuable chat in the morning with Adrian Price-Whelan (Princeton) about some hypothesis testing for stellar pairs. The hypotheses are: unbound and unrelated field stars, co-moving but unbound, and co-moving because bound. We discussed this problem as a hypothesis test, and also as a parameter estimation (estimating binding energy and velocity difference). My position (which my loyal reader knows well) is that you should never do a hypothesis test when you can do a parameter estimation.

A Bayesian hypothesis test involves computing fully marginalized likelihoods (FMLs). A parameter estimation involves computing partially marginalized posteriors. When I present this difference to Dustin Lang (Toronto), he tends to say “how can marginalizing out all but one of your parameters be so much easier than marginalizing out all your parameters?”. Good question! I think the answer has to do with the difference between estimating densities (probability densities that integrate to unity) and estimating absolute probabilities (numbers that sum to unity). But I can't quite get the argument right.
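
In symbols (my notation, just to make the contrast explicit): the hypothesis test needs the fully marginalized likelihood for each hypothesis H,

\[
Z_H = p(D \mid H) = \int p(D \mid \theta, H)\, p(\theta \mid H)\, d\theta ,
\]

an absolute number that is sensitive to the full prior volume, while the parameter estimation only needs a partially marginalized posterior,

\[
p(\theta_1 \mid D, H) \propto \int p(D \mid \theta, H)\, p(\theta \mid H)\, d\theta_2 \cdots d\theta_n ,
\]

which is a density known only up to its normalization, and that is exactly what MCMC delivers without ever computing Z_H.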

In my mind, this is connected to an observation I have seen over at Andrew Gelman's blog more than once: When predicting the outcome of a sporting event, it is much better to predict a pdf over final scores than to predict the win/loss probability. This is absolutely my experience (context: horse racing).
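
One way to see the connection (my gloss, not Gelman's words): the win probability is just a marginal of the score pdf,

\[
p(\mathrm{A\ wins}) = \sum_{s_A > s_B} p(s_A, s_B) ,
\]

so a model that predicts the joint pdf over final scores implies a win/loss probability for free, but not conversely; the score model is both strictly more informative and much easier to check against data.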

2017-04-21

the last year of a giant star's life

Eliot Quataert (Berkeley) gave the astrophysics seminar today. He spoke about the last years-to-days in the lifetime of a massive star. He is interested in explaining the empirical evidence that suggests that many of these stars cough out significant mass-ejection events in the last years of their lives. He has mechanisms that involve convection in the core driving gravity (not gravitational) waves in the outer parts that break at the edge of the star. His talk touched on many fundamental ideas in astrophysics, including the conditions under which an object can exceed the Eddington luminosity. For mass loss driven (effectively) by excess luminosity, you have to both exceed (some form of) the Eddington limit and deposit the energy at large enough radius in the star that there is enough total energy (luminosity times time) to unbind the outskirts. His talk also (inadvertently) touched on some points of impedance matching that I am interested in. Quataert's research style is something I admire immensely: Very simple, very fundamental arguments, backed up by very good analytic and computational work. The talk was a pleasure!
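
To record the rough version of that argument (my paraphrase, using standard definitions): the Eddington luminosity is where radiation pressure on free electrons balances gravity,

\[
L_{\rm Edd} = \frac{4 \pi G M m_p c}{\sigma_T} ,
\]

but exceeding it is not sufficient for mass loss; to unbind a shell of mass \(\Delta M\) sitting at radius \(r\), the excess luminosity must also be sustained long enough to supply the binding energy,

\[
L \, \Delta t \gtrsim \frac{G M \, \Delta M}{r} ,
\]

which is why the energy has to be deposited at large radius, where the binding energy is small.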

After the talk, I went to lunch with Daniela Huppenkothen (NYU), Jack Ireland (GSFC), and Andrew Inglis (GSFC). We spoke more about possible extensions of things they are working on in more Bayesian or more machine-learning directions. We also talked about the astrophysics Decadal process, and the impacts this has on astrophysics missions at NASA and projects at NSF, and comparisons to similar structures in the Solar world. Interestingly rich subject there.

2017-04-20

Solar data

In the morning, Jack Ireland (GSFC) and Andrew Inglis (GSFC) gave talks about data-intensive projects in Solar Physics. Ireland spoke about his Helioviewer project, which is a rich, multi-modal, interactive interface to the multi-channel, heterogeneous, imaging, time-stream, and event data on the Sun, coming from many different missions and facilities. It is like Google Earth for the Sun, but also with very deep links into the raw data. This project has made it very easy for scientists (and citizen scientists) from all backgrounds to interact with and obtain Solar data.

Inglis spoke about his AFINO project to characterize all Solar flares in terms of various time-series (Fourier) properties. He is interested in questions for Solar flares very similar to those that Huppenkothen (NYU) is interested in for neutron-star and black-hole transients. Some of the interaction during the talk was about different probabilistic approaches to power-spectrum questions in the time domain.

Over lunch I met with Ruth Angus (Columbia) to consult on her stellar chronometer projects. We discussed bringing in vertical action (yes, Galactic dynamics) as a stellar clock or age indicator. It is an odd indicator, because the vertical action (presumably) random-walks with time. This makes it a very low-precision clock! But it has many nice properties: it works for all classes of stars (possibly with subtleties), in our self-calibration context it connects age indicators of different types across different stars, and it is good at constraining old ages. We wrote some math and discussed our MCMC sampling issues further.
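
A toy version of the random-walk point (my sketch, not Angus's actual formulation): if encounters with spiral arms and molecular clouds kick the vertical action diffusively, then for a coeval population something like

\[
\langle J_z \rangle(t) \simeq J_{z,0} + D \, t
\]

holds, with a broad (very roughly exponential) scatter of individual stars around the mean. A single star's \(J_z\) therefore delivers only a weak age likelihood, but in a hierarchical or self-calibration setting, many stars sharing other age indicators can pin down the mean relation, and hence the clock.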

2017-04-19

after SDSS-IV; red-clump stars

At Stars group meeting, Juna Kollmeier (OCIW) spoke about the plans for the successor project to SDSS-IV. It will be an all-sky spectroscopic survey, with 15 million spectroscopic visits of 5-ish million targets. The cadence and plan are made possible by advances in robotic fiber positioning, and by The Cannon, which permits inferences about stars that degrade gracefully with decreasing signal-to-noise ratio. The survey will use the 2.5-m SDSS telescope in the North, and the 2.5-m du Pont in the South. Science goals include Galactic archaeology, stellar systems (binaries, triples, and so on), evolved stars, origins of the elements, TESS scientific support and follow-up, and time-domain events. The audience had many questions about operations and goals, including the maturity of the science plan. The short story is that partners who buy in to the survey now will have a lot of influence over the targeting and scientific program.

Keith Hawkins (Columbia) showed his red-clump-star models built on TGAS and 2MASS and WISE and GALEX data. He finds an intrinsic scatter of about 0.17 magnitude (RMS) in many bands, and, when the scatter is larger, there are color trends that could be calibrated out. He also, incidentally, infers a dust reddening for every star. One nice result is that he finds a huge dependence of the GALEX photometry on metallicity, which has lots of possible scientific applications. The crowd discussed the extent to which theoretical ideas support the standard-ness of RC stars.

2017-04-18

Dr Vakili

The research highlight of the day was a beautiful PhD defense by my student MJ Vakili (NYU). Vakili presented two big projects from his thesis: In one, he has developed fast mock-catalog software for understanding cosmic variance in large-scale structure surveys. In the other, he has built and run an inference method to learn the pixel-convolved point-spread function in a space-based imaging device.
In both cases, he has good evidence that his methods are the best in the world. (We intend to write up the latter in the Summer.) Vakili's thesis is amazingly broad, going from pixel-level image processing work that will serve weak-lensing and other precise imaging tasks, all the way up to new methods for using computational simulations to perform principled inferences with cosmological data sets. He was granted a PhD at the end of an excellent defense and a lively set of arguments in the seminar room and in committee. Thank you, MJ, for a great body of work, and a great contribution to my scientific life.

2017-04-17

accretion onto binary black holes

I talked to Ana Bonaca (Harvard) and Lauren Anderson (Flatiron) about their projects in the morning. With Bonaca I discussed the computation of numerically stable derivatives with respect to parameters. This is not a trivial problem when the model (of which you are taking derivatives) is itself a simulation or computation. With Anderson we edited and prioritized the to-do list to finish writing the first draft of her paper.
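
For context, the usual first line of attack looks something like this (a generic recipe, not necessarily the scheme Bonaca and I settled on; the step-size scan is the part that matters when the model is a noisy simulation):

    # Generic sketch: stable numerical derivatives of an expensive,
    # possibly noisy model (not necessarily what we settled on).
    import numpy as np

    def central_difference(model, theta, i, h):
        """d(model)/d(theta_i) by central differences; truncation
        error is O(h^2), but noise error grows like 1/h."""
        dtheta = np.zeros_like(theta)
        dtheta[i] = h
        return (model(theta + dtheta) - model(theta - dtheta)) / (2. * h)

    def pick_step(model, theta, i, hs=np.logspace(-8, -1, 8)):
        """Scan step sizes; keep the h where the estimate is most
        stable against changing h. Too-small h amplifies simulation
        noise; too-large h incurs truncation error."""
        ds = [central_difference(model, theta, i, h) for h in hs]
        errs = [np.max(np.abs(a - b)) for a, b in zip(ds[:-1], ds[1:])]
        return hs[int(np.argmin(errs))]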

At lunch time, Geoff Ryan (NYU) gave the CCPP brown-bag talk, about accretion modes for binary black holes. Because the black holes orbit in a cavity in the circum-binary accretion disk, and then are fed by a stream (from the inner edge of the cavity), there is an unavoidable creation of shocks, either in transient activity or in steady state. He analyzed the steady-state solution, and finds that the shocks drive accretion. It is a beautiful model for accretion that does not depend in any way on any kind of artificial or sub-grid viscosity.

2017-04-14

writing

I worked on putting references into my similarity-of-objects document (how do you determine that two different objects are identical in their measurable properties?), and tweaking the words, with the hope that I will have something postable to the arXiv soon.

2017-04-13

crazy space hardware

I spent today at JPL, where Leonidas Moustakas (JPL) set up for me a great schedule with various of the astronomers. I met the famous John Trauger (JPL), who was the PI on WFPC2 and deserves some share of the credit for repairing the Hubble Space Telescope. I discussed coronagraphy with Trauger and various others. I learned that coronagraphs need two deformable mirrors (not just one) to be properly adaptive. With Dimitri Mawet (Caltech) I discussed what kind of data set we would like to have in order to learn, in a data-driven way, to predictively adapt the deformable mirrors in a coronagraph that is currently taking data.

With Eric Huff (JPL) I discussed the possibility of doing weak lensing without ever explicitly measuring any galaxies—that is, measuring shear in the pixels of the images of the field directly. I also discussed with him the (apparently insane but maybe not) idea of using the Sun itself as a gravitational lens, capable of imaging continents on a distant, rocky exoplanet. This requires getting a spacecraft out to some 550 AU, and then positioning it to km accuracy! Oh and then blocking out the light from the Sun.
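
The 550 AU figure follows from textbook lensing optics (my arithmetic): a ray grazing the solar limb is deflected by

\[
\alpha = \frac{4 G M_\odot}{c^2 R_\odot} \approx 1.75~\mathrm{arcsec} ,
\]

so the grazing rays cross the optical axis at a focal distance

\[
d \approx \frac{R_\odot}{\alpha} = \frac{c^2 R_\odot^2}{4 G M_\odot} \approx 550~\mathrm{AU} ,
\]

and rays with larger impact parameters focus farther out, so the usable focal region is a line extending outward from 550 AU.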

Martin Elvis (CfA) gave a provocative talk today, about the future of NASA astrophysics in the context of commercial space, which might drive down prices on launch vehicles, and drive up the availability of heavy lift. A theme of his talk, and a theme of many of my conversations during the day, was just how long the time-scales are on NASA astrophysics missions, from proposal to launch. At some point missions might start to take longer than a career; that could be very bad (or at least very disruptive) for the field.

2017-04-12

ZTF; self-calibration; long-period planets

I spent today at Caltech, where I spoke about self-calibration. Prior to that I had many interesting conversations. From Anna Ho (Caltech) I learned that ZTF is going to image 15,000 square degrees per night. That is life-changing! I argued that they should position their fields to facilitate self-calibration, which might conflict with some ideas they have about image differencing.

With Nadia Blagorodnova (Caltech) I discussed calibration of the SED Machine, which is designed to do rapid low-resolution follow-up of ZTF and LSST events. They are using dome and twilight flats (something I said is a bad idea in my colloquium), and indeed they can see that these flats are deficient or inaccurate. We discussed how to take steps towards self-calibration.

With Heather Knutson (Caltech) I discussed long-period planets. She is following up (with radial velocity measurements) the discoveries that Foreman-Mackey and I (and others) made in the Kepler data. She doesn't clearly agree with our finding that there are something like 2 planets per star (!) at long periods, but of course her radial-velocity work has different sensitivity to planets. We discussed the possibility of using radial-velocity surveys to do planet populations work; she believes it is possible (something I have denied previously, on the grounds of unrecorded human decision-making in the observing strategies).

In my talk I made some fairly aggressive statements about Euclid's observing strategies and calibration. That got me some valuable feedback, including some hope that they will modify their strategies before launch. The things I want can be set or modified at the 13th hour!

2017-04-11

self-calibration

I worked more today on my slides on self-calibration for the 2017 Neugebauer Lecture at Caltech. I had an epiphany, which is that the color–magnitude diagram model I am building with Lauren Anderson (Flatiron) can be seen in the same light as self-calibration. The “instrument” we are calibrating is the physical regularities of stars! (This can be seen as an instrument built by God, if you want to get grandiose.) I also drew a graphical model for the self-calibration of the Sloan Digital Sky Survey imaging data that we did oh so many years ago. It would probably be possible to re-do it with full Bayes with contemporary technology!

2017-04-10

causal photometry

Last year, Dun Wang (NYU) and Dan Foreman-Mackey (UW) discovered, on a visit to Bernhard Schölkopf (MPI-IS), that independent component analysis (ICA) can be used to separate spacecraft and stellar variability in Kepler imaging, and to perform variable-source photometry in crowded-field imaging. I started to write that up today. ICA is a magic method, which can’t be correct in detail, but which is amazingly powerful straight out of the box.
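
For the record, the out-of-the-box version really is just a few lines. This is a sketch of the generic technique, not Wang's actual pipeline; the data shapes and component count are stand-in assumptions.

    # Out-of-the-box ICA photometry sketch (not Wang's actual pipeline).
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    lightcurves = rng.normal(size=(1000, 25))  # stand-in for a real
                                               # (n_times, n_pixels) stamp

    ica = FastICA(n_components=5, random_state=0)
    components = ica.fit_transform(lightcurves)  # (n_times, 5) sources
    mixing = ica.mixing_                         # (n_pixels, 5) weights

    # The hope: one component tracks the star's intrinsic variability,
    # while others absorb spacecraft systematics (pointing drift, thermal
    # variations) that are shared across pixels with different weights.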

I also worked on my slides for the 2017 Neugebauer Memorial Lecture at Caltech, which is on Wednesday. I am giving a talk the likes of which I have never given before.

2017-04-07

searches for cosmological estimators

I spent my research time today working through pages of the nearly-complete PhD dissertation of MJ Vakili (NYU). The thesis contains results in large-scale structure and image processing, which are related through long-term goals in weak lensing. In some ways the most exciting part of the thesis for me right now is the part on HST WFC3 IR calibration, in part because it is new, and in part because I am going to show some of these results in Pasadena next week.

In the morning, Colin Hill (Columbia) gave a very nice talk on secondary anisotropies in the cosmic microwave background. He has found a new (and very simple) way to detect the kinetic S-Z effect statistically, and can use it to measure the baryon fraction in large-scale structure empirically. He has found a new statistic for measuring the thermal S-Z effect too, which provides more constraining power on cosmological parameters. In each case, his statistic or estimator is cleverly designed around physical intuition and symmetries. That led me to ask him whether even better statistics might be found by brute-force search, constrained by symmetries. He agreed, and has even done some thinking along these lines already.

2017-04-06

direct detection of the cosmic neutrino background

Today was an all-day meeting at the Flatiron Institute on neutrinos in cosmology and large-scale structure, organized by Francisco Villaescusa-Navarro (Flatiron). I wasn't able to be at the whole meeting, but two important things I learned in the part I saw are the following:

Chris Tully (Princeton) astonished me by showing his real, funded attempt to actually directly detect the thermal neutrinos from the Big Bang. That is audacious. He has a very simple design, based on capture of electron neutrinos by tritium that has been very loosely bound to a graphene substrate. Details of the experiment include absolutely enormous surface areas of graphene, and also very clever focusing (in a phase-space sense) of the liberated electrons. I'm not worthy!

Raul Jimenez (Barcelona) spoke about (among other things) a statistical argument for a normal (rather than inverted) hierarchy for neutrino masses. His argument depends on putting priors over neutrino masses and then computing a Bayes factor. This argument made the audience suspicious, and he got some heat during and after his talk. Some comments: One is that he is not just doing simple Bayes factors; he is learning a hierarchical model and assessing within that. That is a good idea. Another is that this is actually the ideal place to use Bayes factors: Both models (normal and inverted) have exactly the same parameters, with exactly the same prior. That obviates many of my usual objections (yes, my loyal reader may be sighing) to computing the integrals I call FML. I need to read and analyze his argument at some point soon.
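
To spell out why this is the favorable case for a Bayes factor (my gloss on the argument): the two hypotheses share the same parameters (the three neutrino masses m) and the same prior density, differing only in the ordering constraint, so

\[
K = \frac{p(D \mid \mathrm{NH})}{p(D \mid \mathrm{IH})}
  = \frac{\int_{\rm NH} p(D \mid m)\, p(m)\, dm}{\int_{\rm IH} p(D \mid m)\, p(m)\, dm} ,
\]

and the arbitrary prior-volume factors that usually make FMLs treacherous largely cancel between numerator and denominator.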

One amusing note about the day: For technical reasons, Tully really needs the neutrino mass hierarchy to be inverted (not normal), while Jimenez is arguing that the smart money is on the normal (not inverted) ordering.

2017-04-05

a stellar stream with only two stars? And etc

In Stars group meeting, Stephen Feeney (Flatiron) walked us through his very complete hierarchical model of the distance ladder, including supernova Hubble Constant measurements. He can self-calibrate and propagate all of the errors. The model is seriously complicated, but no more complicated than it needs to be to capture the covariances and systematics that we worry about. He doesn't resolve (yet) the tension between distance ladder and CMB (especially Planck).

Semyeong Oh (Princeton) and Adrian Price-Whelan (Princeton) reported on some of their follow-up spectroscopy of co-moving pairs of widely separated stars. They have a pair that is co-moving, moving at escape velocity in the halo, and separated by 5-ish pc! This could be a cold stellar stream detected with just two stars! How many of those will we find! Yet more evidence that Gaia changes the world.

Josh Winn (Princeton) dropped by and showed us a project that, by finding very precise stellar radii, gets more precise planet radii. That, in turn, shows that the super-Earths really split into two populations, super-Earths and mini-Neptunes, with a deficit between. Meaning: There are non-trivial features in the planet radius distribution. He showed some attempts to demonstrate that this is real, reminding me of the whole accuracy vs precision thing, once again.

In Cosmology group meeting, Dick Bond (CITA) corrected our use of “intensity mapping” to “line intensity mapping” and then talked about things that might be possible as we observe more and more lines in the same volume. There is a lot to say here, but some projects are going small and deep, and others are going wide and shallow; we learn complementary things from these approaches. One question is: How accurate do we need to be in our modeling of neutral and molecular gas, and the radiation fields that affect them, in order for us to do cosmology with these observables? I am hoping we can simultaneously learn things about the baryons, radiation, and large-scale structure.

2017-04-04

words on a plane

On the plane home, I worked on my similarity-of-vectors (or stellar twins) document. I got it to the first-draft stage.

2017-04-03

how to add and how to subtract

My only research today was conversations about various matters of physics, astrophysics, and statistics with Dan Maoz (TAU), as we hiked near the Red Sea. He recommended these three papers on how to add and how to subtract astronomical images. I haven't read them yet, but as my loyal reader knows, the word “optimal” is a red flag for me, as in I'm-a-bull-in-a-bull-ring type of red flag. (Spoiler alert: The bull always loses.)

On the drive home Maoz expressed the extremely strong opinion that dumping a small heat load Q inside a building during the hot summer does not lead to any additional load on that building's air-conditioning system. I spent part of my late evening thinking about whether there are any conceivable assumptions under which this position might be correct. Here's one: The building is so leaky (of air) that the entire interior contents of the building are replaced before the A/C has cooled it by a significant amount. That would work, but it would also be a limit in which A/C doesn't do anything at all, really; that is, in this limit, the interior of the building is the same temperature as the exterior. So I think I concluded that if you have a well-cooled building, if you add heat Q internally, the A/C must do marginal additional work to remove it. One important assumption I am making is the following (and maybe this is why Maoz disagreed): The A/C system is thermostatic and hits its thermostatic limits from time to time. (And that is inconsistent with the ultra-leaky-building idea, above.)
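
In the thermostatic regime the marginal cost is elementary to quantify (standard thermodynamics, my numbers): an air conditioner with coefficient of performance COP must do extra electrical work

\[
W_{\rm extra} = \frac{Q}{\mathrm{COP}} ,
\qquad
\mathrm{COP} \le \frac{T_{\rm in}}{T_{\rm out} - T_{\rm in}} ,
\]

to pump the extra heat Q back outside; for a typical COP of around 3, every joule dumped inside costs about a third of a joule of additional work at the compressor, on top of the joule itself being rejected outdoors.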

2017-04-02

John Bahcall (and etc)

I spent today at Tel Aviv University, where I gave the John Bahcall Astrophysics Lecture. I spoke about exoplanet detection and population inferences. I spent quite a bit of the day with Dovi Poznanski (TAU) and Dani Maoz (TAU). Poznanski and I discussed extensions and alternatives to his projects to use machine learning to find outliers in large astrophysical data sets. This continued conversations with him and Dalya Baron (TAU) from the previous evening.

Maoz and I discussed his conversions of cosmic star-formation history into metal-enrichment histories. These involve the SNIa delay times, and they provide new interpretations of the alpha-to-Fe vs Fe-to-H ratio diagrams. The abundance ratios don't drop in alpha-to-Fe when the SNIa kick in (that's the standard story, but it's wrong); the ratio drops when the SNIa contribution to the metal-production rate exceeds the core-collapse contribution. If the star-formation history is continuous, this can be long after the appearance of the first SNe Ia. Deep stuff.

The day gave me some time to reflect on my time with John Bahcall at the IAS. I have too much to say here, but I found myself in the evening reflecting on his remarkable and prescient scientific intuition. He was one of the few astronomers who understood, immediately on the early failure of HST, that it made more sense to try to repair it than try to replace it. This was a great realization, and transformed both astrophysics and NASA. He was also one of the few physicists who strongly believed that the Solar neutrino problem would lead to a discovery of new physics. Most particle physicists thought that the Solar model couldn't be that robust, and most astronomers didn't think about neutrinos. Boy was John right!

(I also snuck in a few minutes on my stellar twins document, which I gave to Poznanski for comments.)