2017-04-14

writing

I worked on putting references into my similarity-of-objects document (how do you determine that two different objects are identical in their measurable properties?), and tweaking the words, with the hope that I will have something postable to the arXiv soon.
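
The statistical core of that document is deciding whether two noisy measurement vectors are consistent with a single truth. One standard baseline (not necessarily what the document ends up advocating) is a chi-squared test on the difference vector; here is a toy sketch with made-up data:

```python
import numpy as np
from scipy import stats

def identical_chi2(x1, C1, x2, C2):
    """Chi-squared test of the hypothesis that two noisy measurement
    vectors x1, x2 (with covariances C1, C2) share one true value."""
    d = x1 - x2
    chi2 = d @ np.linalg.solve(C1 + C2, d)
    dof = len(d)
    return chi2, stats.chi2.sf(chi2, dof)

# toy example: two measurements of the same 8-dimensional truth
rng = np.random.default_rng(42)
truth = rng.normal(size=8)
sigma = 0.05
x1 = truth + sigma * rng.normal(size=8)
x2 = truth + sigma * rng.normal(size=8)
C = sigma ** 2 * np.eye(8)
print(identical_chi2(x1, C, x2, C))  # chi2 near 8, p not small
```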

2017-04-13

crazy space hardware

I spent today at JPL, where Leonidas Moustakas (JPL) set up for me a great schedule with various of the astronomers. I met the famous John Trauger (JPL), who was the PI on WFPC2 and deserves some share of the credit for repairing the Hubble Space Telescope. I discussed coronagraphy with Trauger and various others. I learned about the need for coronagraphs to have two deformable mirrors (not just one) to be properly adaptive. With Dimitri Mawet (Caltech) I discussed what kind of data set we would like to have in order to learn in a data-driven way to predictively adapt the deformable mirrors in a coronagraph that is currently taking data.
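
To make the predictive-adaptation idea concrete: one could learn a regression from the recent history of wavefront residuals to the next frame, and feed that forward to the mirror. This is a toy sketch only (no real instrument interface; the telemetry is made up), using ridge regression on lagged features:

```python
import numpy as np
from sklearn.linear_model import Ridge

# fake telemetry: one slowly drifting wavefront mode, sampled at the sensor rate
rng = np.random.default_rng(17)
t = np.arange(2000)
signal = np.sin(2 * np.pi * t / 331.0) + 0.1 * rng.normal(size=t.size)

# lagged design matrix: predict sample n from the p previous samples
p = 10
X = np.stack([signal[i:i - p] for i in range(p)], axis=1)
y = signal[p:]

model = Ridge(alpha=1e-3).fit(X[:1500], y[:1500])
pred = model.predict(X[1500:])
print("rms without prediction:", np.std(y[1500:]))
print("rms after prediction:  ", np.std(y[1500:] - pred))
```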

With Eric Huff (JPL) I discussed the possibility of doing weak lensing without ever explicitly measuring any galaxies—that is, measuring shear in the pixels of the images of the field directly. I also discussed with him the (apparently insane but maybe not) idea of using the Sun itself as a gravitational lens, capable of imaging continents on a distant, rocky exoplanet. This requires getting a spacecraft out to some 550 AU, and then positioning it to km accuracy! Oh and then blocking out the light from the Sun.
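
For my own memory, the 550 AU figure comes straight from the gravitational bending angle alpha = 4GM/(c^2 b): rays grazing the limb at impact parameter b = R_sun cross the axis at distance d = b/alpha, and rays at larger b focus farther out, so this is the minimum distance. A quick check:

```python
import numpy as np

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m / s
M_sun = 1.989e30   # kg
R_sun = 6.957e8    # m
AU = 1.496e11      # m

alpha = 4 * G * M_sun / (c ** 2 * R_sun)  # bending angle at the limb (rad)
d = R_sun / alpha                         # focus of limb-grazing rays
print(d / AU)                             # ~ 548 AU
```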

Martin Elvis (CfA) gave a provocative talk today, about the future of NASA astrophysics in the context of commercial space, which might drive down prices on launch vehicles, and drive up the availability of heavy lift. A theme of his talk, and a theme of many of my conversations during the day, was just how long the time-scales are on NASA astrophysics missions, from proposal to launch. At some point missions might start to take longer than a career; that could be very bad (or at least very disruptive) for the field.

2017-04-12

ZTF; self-calibration; long-period planets

I spent today at Caltech, where I spoke about self-calibration. Prior to that I had many interesting conversations. From Anna Ho (Caltech) I learned that ZTF is going to image 15,000 square degrees per night. That is life-changing! I argued that they should position their fields to facilitate self-calibration, which might break some ideas they have about image differencing.

With Nadia Blagorodnova (Caltech) I discussed calibration of the SED Machine, which is designed to do rapid low-resolution follow-up of ZTF and LSST events. They are using dome and twilight flats (something I said is a bad idea in my colloquium) and indeed they can see that those flats are deficient or inaccurate. We discussed how to take steps towards self-calibration.

With Heather Knutson (Caltech) I discussed long-period planets. She is following up (with radial velocity measurements) the discoveries that Foreman-Mackey and I (and others) made in the Kepler data. She doesn't clearly agree with our finding that there are something like 2 planets per star (!) at long periods, but of course her radial-velocity work has different sensitivity to planets. We discussed the possibility of using radial-velocity surveys to do planet populations work; she believes it is possible (something I have denied previously, on the grounds of unrecorded human decision-making in the observing strategies).

In my talk I made some fairly aggressive statements about Euclid's observing strategies and calibration. That got me some valuable feedback, including some hope that they will modify their strategies before launch. The things I want can be set or modified at the 13th hour!

2017-04-11

self-calibration

I worked more today on my slides on self-calibration for the 2017 Neugebauer Lecture at Caltech. I had an epiphany, which is that the color–magnitude diagram model I am building with Lauren Anderson (Flatiron) can be seen in the same light as self-calibration. The “instrument” we are calibrating is the physical regularities of stars! (This can be seen as an instrument built by God, if you want to get grandiose.) I also drew a graphical model for the self-calibration of the Sloan Digital Sky Survey imaging data that we did oh so many years ago. It would probably be possible to re-do it in full Bayes with contemporary technology!
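
To make the self-calibration idea concrete: in its simplest photometric, least-squares form, you solve simultaneously for the source properties and the instrument parameters, using only the redundancy of repeat observations. A toy sketch (nothing like the full SDSS problem), where each measurement is a star magnitude plus a per-exposure zero point:

```python
import numpy as np

# toy survey: N stars, M exposures, 4000 measurements scattered among them
rng = np.random.default_rng(3)
N, M, sigma = 300, 20, 0.02
true_mag = rng.uniform(14, 18, N)
true_zp = rng.normal(0, 0.1, M)

star = rng.integers(0, N, 4000)   # which star each measurement hits
exp = rng.integers(0, M, 4000)    # in which exposure
obs = true_mag[star] + true_zp[exp] + sigma * rng.normal(size=4000)

# design matrix: one column per star magnitude, one per zero point
A = np.zeros((4000, N + M))
A[np.arange(4000), star] = 1.0
A[np.arange(4000), N + exp] = 1.0

# pin the one exact degeneracy (a constant trades between mags and zps)
A = np.vstack([A, np.concatenate([np.zeros(N), np.ones(M)])])
b = np.concatenate([obs, [0.0]])

params, *_ = np.linalg.lstsq(A, b, rcond=None)
print("zero-point rms error:", np.std(params[N:] - (true_zp - true_zp.mean())))
```

In the color–magnitude-diagram analogy, the role of the zero points is played by the physical regularities of the stars themselves.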

2017-04-10

causal photometry

Last year, Dun Wang (NYU) and Dan Foreman-Mackey (UW) discovered, on a visit to Bernhard Schölkopf (MPI-IS), that independent components analysis can be used to separate spacecraft and stellar variability in Kepler imaging, and perform variable-source photometry in crowded-field imaging. I started to write that up today. ICA is a magic method, which can't be correct in detail, but which is amazingly powerful straight out of the box.
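
Here is a minimal sketch of the idea (emphatically not Wang's actual pipeline), using scikit-learn's FastICA to unmix a shared spacecraft trend from independent stellar signals spread across several pixel light curves:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(8)
t = np.linspace(0, 10, 1000)

# latent sources: one spacecraft systematic plus two independent stars
spacecraft = np.sign(np.sin(2 * np.pi * 0.7 * t))   # pointing-jump-like
star1 = np.sin(2 * np.pi * 1.3 * t)                 # a pulsator
star2 = (np.mod(t, 3.1) < 0.1).astype(float)        # transit-like dips
S = np.stack([spacecraft, star1, star2], axis=1)

# observed pixel light curves are unknown linear mixtures of the sources
A = rng.normal(size=(3, 6))
X = S @ A + 0.02 * rng.normal(size=(1000, 6))

ica = FastICA(n_components=3, random_state=0)
S_hat = ica.fit_transform(X)  # recovered sources, up to order, sign, scale
print(S_hat.shape)            # (1000, 3)
```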

I also worked on my slides for the 2017 Neugebauer Memorial Lecture at Caltech, which is on Wednesday. I am giving a talk the likes of which I have never given before.

2017-04-07

searches for cosmological estimators

I spent my research time today working through pages of the nearly-complete PhD dissertation of MJ Vakili (NYU). The thesis contains results in large-scale structure and image processing, which are related through long-term goals in weak lensing. In some ways the most exciting part of the thesis for me right now is the part on HST WFC3 IR calibration, in part because it is new, and in part because I am going to show some of these results in Pasadena next week.

In the morning, Colin Hill (Columbia) gave a very nice talk on secondary anisotropies in the cosmic microwave background. He has found a new (and very simple) way to detect the kinetic S-Z effect statistically, and can use it to measure the baryon fraction in large-scale structure empirically. He has found a new statistic for measuring the thermal S-Z effect too, which provides more constraining power on cosmological parameters. In each case, his statistic or estimator is cleverly designed around physical intuition and symmetries. That led me to ask him whether even better statistics might be found by brute-force search, constrained by symmetries. He agreed and has even done some thinking along these lines already.
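
As a cartoon of what such a brute-force search could look like: fix a symmetry-respecting family of statistics (say, quadratic in the data with non-negative weights) and search for the member with the best signal-to-noise on the parameter, as measured on mocks. A toy sketch with made-up signal and noise templates:

```python
import numpy as np

rng = np.random.default_rng(5)
npix, theta = 10, 1.0
s2 = np.linspace(0.2, 2.0, npix) ** 2  # signal-template variance per pixel
n2 = np.ones(npix)                     # noise variance per pixel

# one shared batch of mock data vectors, drawn at the fiducial theta
X2 = (rng.normal(size=(4000, npix)) * np.sqrt(theta * s2 + n2)) ** 2

def merit(w):
    """Signal-to-noise on theta of the statistic T = sum_i w_i x_i^2."""
    dT_dtheta = np.sum(w * s2)  # analytic, since <x_i^2> = theta s2_i + n2_i
    return dT_dtheta / (X2 @ w).std()

# brute-force search over non-negative, normalized weightings
best_m = -np.inf
for _ in range(3000):
    w = rng.exponential(size=npix)
    w /= w.sum()
    best_m = max(best_m, merit(w))

print("uniform weighting:    ", merit(np.full(npix, 1.0 / npix)))
print("best of random search:", best_m)
```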

2017-04-06

direct detection of the cosmic neutrino background

Today was an all-day meeting at the Flatiron Institute on neutrinos in cosmology and large-scale structure, organized by Francisco Villaescusa-Navarro (Flatiron). I wasn't able to be at the whole meeting, but two important things I learned in the part I saw are the following:

Chris Tully (Princeton) astonished me by showing his real, funded attempt to actually directly detect the thermal neutrinos from the Big Bang. That is audacious. He has a very simple design, based on capture of electron neutrinos by tritium that has been very loosely bound to a graphene substrate. Details of the experiment include absolutely enormous surface areas of graphene, and also very clever focusing (in a phase-space sense) of the liberated electrons. I'm not worthy!

Raul Jimenez (Barcelona) spoke about (among other things) a statistical argument for a normal (rather than inverted) hierarchy for neutrino masses. His argument depends on putting priors over neutrino masses and then computing a Bayes factor. This argument made the audience suspicious, and he got some heat during and after his talk. Some comments: One is that he is not just doing simple Bayes factors; he is learning a hierarchical model and assessing within that. That is a good idea. Another is that this is actually the ideal place to use Bayes factors: Both models (normal and inverted) have exactly the same parameters, with exactly the same prior. That obviates many of my usual objections (yes, my loyal reader may be sighing) to computing the integrals I call FML. I need to read and analyze his argument at some point soon.
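
The same-parameters, same-priors point can be illustrated with a toy evidence ratio: both orderings involve the same three masses and the same prior on the lightest mass; only the mapping from lightest mass to the other two differs (via the measured splittings). A sketch, with a made-up cosmological constraint on the mass sum standing in for the real data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
dm21, dm31 = 7.5e-5, 2.5e-3  # solar and atmospheric splittings (eV^2)

def mass_sum(m0, normal=True):
    """Total neutrino mass given the lightest mass m0 (one common convention)."""
    if normal:
        return m0 + np.sqrt(m0**2 + dm21) + np.sqrt(m0**2 + dm31)
    return m0 + np.sqrt(m0**2 + dm31 - dm21) + np.sqrt(m0**2 + dm31)

# identical prior on the lightest mass in both models: uniform on 0-0.2 eV
m0 = rng.uniform(0.0, 0.2, 200_000)

# made-up "measurement" of the mass sum, purely for illustration
like = lambda s: stats.norm.pdf(s, loc=0.06, scale=0.03)

Z_normal = like(mass_sum(m0, True)).mean()    # Monte Carlo evidence
Z_inverted = like(mass_sum(m0, False)).mean()
print("Bayes factor (normal / inverted):", Z_normal / Z_inverted)
```

Because the inverted ordering forces the mass sum above about 0.10 eV while the normal ordering allows it down to about 0.06 eV, any data preferring a small sum tilts the evidence toward normal.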

One amusing note about the day: For technical reasons, Tully really needs the neutrino mass hierarchy to be inverted (not normal), while Jimenez is arguing that the smart money is on the normal (not inverted).

2017-04-05

a stellar stream with only two stars? And etc

In Stars group meeting, Stephen Feeney (Flatiron) walked us through his very complete hierarchical model of the distance ladder, including supernova Hubble Constant measurements. He can self-calibrate and propagate all of the errors. The model is seriously complicated, but no more complicated than it needs to be to capture the covariances and systematics that we worry about. He doesn't resolve (yet) the tension between distance ladder and CMB (especially Planck).

Semyeong Oh (Princeton) and Adrian Price-Whelan (Princeton) reported on some of their follow-up spectroscopy of co-moving pairs of widely separated stars. They have a pair that is co-moving, moving at escape velocity in the halo, and separated by 5-ish pc! This could be a cold stellar stream detected with just two stars! How many of those will we find! Yet more evidence that Gaia changes the world.

Josh Winn (Princeton) dropped by and showed us a project that, by finding very precise stellar radii, gets more precise planet radii. That, in turn, shows that the super-Earths really split into two populations, super-Earths and mini-Neptunes, with a deficit between. Meaning: There are non-trivial features in the planet radius distribution. He showed some attempts to demonstrate that this is real, reminding me of the whole accuracy vs precision thing, once again.

In Cosmology group meeting, Dick Bond (CITA) corrected our use of “intensity mapping” to “line intensity mapping” and then talked about things that might be possible as we observe more and more lines in the same volume. There is a lot to say here, but some projects are going small and deep, and others are going wide and shallow; we learn complementary things from these approaches. One question is: How accurate do we need to be in our modeling of neutral and molecular gas, and the radiation fields that affect them, in order for us to do cosmology with these observables? I am hoping we can simultaneously learn things about the baryons, radiation, and large-scale structure.

2017-04-04

words on a plane

On the plane home, I worked on my similarity-of-vectors (or stellar twins) document. I got it to the first-draft stage.

2017-04-03

how to add and how to subtract

My only research today was conversations about various matters of physics, astrophysics, and statistics with Dan Maoz (TAU), as we hiked near the Red Sea. He recommended these three papers on how to add and how to subtract astronomical images. I haven't read them yet, but as my loyal reader knows, the word “optimal” is a red flag for me, as in I'm-a-bull-in-a-bull-ring type of red flag. (Spoiler alert: The bull always loses.)
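
For reference, the baseline everyone agrees on is the per-pixel inverse-variance-weighted coadd; the papers' claim (as I understand it, not yet having read them) is that you can do better by accounting for the per-epoch PSFs. A minimal sketch of the baseline only:

```python
import numpy as np

def ivar_coadd(images, variances):
    """Per-pixel inverse-variance-weighted coadd of aligned images.
    images, variances: arrays of shape (n_epochs, ny, nx)."""
    w = 1.0 / variances
    coadd = np.sum(w * images, axis=0) / np.sum(w, axis=0)
    return coadd, 1.0 / np.sum(w, axis=0)  # coadd and its variance

# toy usage: three noisy epochs of the same (flat) scene
rng = np.random.default_rng(1)
truth = np.ones((64, 64))
var = np.array([0.1, 0.4, 0.9])[:, None, None] * np.ones((3, 64, 64))
imgs = truth + np.sqrt(var) * rng.normal(size=(3, 64, 64))
coadd, coadd_var = ivar_coadd(imgs, var)
print(coadd.std(), np.sqrt(coadd_var[0, 0]))  # should roughly agree
```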

On the drive home Maoz expressed the extremely strong opinion that dumping a small heat load Q inside a building during the hot summer does not lead to any additional load on that building's air-conditioning system. I spent part of my late evening thinking about whether there are any conceivable assumptions under which this position might be correct. Here's one: The building is so leaky (of air) that the entire interior contents of the building are replaced before the A/C has cooled it by a significant amount. That would work, but it would also be a limit in which A/C doesn't do anything at all, really; that is, in this limit, the interior of the building is the same temperature as the exterior. So I think I concluded that if you have a well-cooled building, if you add heat Q internally, the A/C must do marginal additional work to remove it. One important assumption I am making is the following (and maybe this is why Maoz disagreed): The A/C system is thermostatic and hits its thermostatic limits from time to time. (And that is inconsistent with the ultra-leaky-building idea, above.)
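
For the record, here is the steady-state bookkeeping, assuming the thermostat pins the interior temperature and the unit has coefficient of performance COP:

```latex
\dot{Q}_{\mathrm{removed}} = \dot{Q}_{\mathrm{envelope}} + \dot{Q}_{\mathrm{internal}} ,
\qquad
\dot{W}_{\mathrm{AC}} = \frac{\dot{Q}_{\mathrm{removed}}}{\mathrm{COP}}
\quad\Longrightarrow\quad
\Delta \dot{W}_{\mathrm{AC}} = \frac{Q}{\mathrm{COP}} > 0 .
```

The ultra-leaky-building limit is the case where the envelope load blows up, the A/C saturates, and no steady state at the set-point temperature exists, which is exactly where the thermostat assumption fails.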

2017-04-02

John Bahcall (and etc)

I spent today at Tel Aviv University, where I gave the John Bahcall Astrophysics Lecture. I spoke about exoplanet detection and population inferences. I spent quite a bit of the day with Dovi Poznanski (TAU) and Dani Maoz (TAU). Poznanski and I discussed extensions and alternatives to his projects to use machine learning to find outliers in large astrophysical data sets. This continued conversations with him and Dalya Baron (TAU) from the previous evening.

Maoz and I discussed his conversions of cosmic star-formation history into metal enrichment histories. These involve the SNIa delay times, and they provide new interpretations of the alpha-to-Fe vs Fe-to-H ratio diagrams. The abundance ratios don't drop in alpha-to-Fe when the SNIa kick in (that's the standard story, but it's wrong); the drop happens when the SNIa contribution to the metal production rate exceeds the core-collapse contribution. If the star-formation history is continuous, this can be far after the appearance of the first Ia SNe. Deep stuff.
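
That rate statement is easy to check with a toy convolution: take a constant star-formation history, convolve it with a t^{-1} SNIa delay-time distribution, and compare the two iron-production channels. A sketch with roughly Maoz-like normalizations and made-up yields:

```python
import numpy as np

t = np.linspace(0.01, 13.7, 4000)  # Gyr since star formation began
dt = t[1] - t[0]
sfr = np.ones_like(t)              # constant SFH, Msun per Gyr (toy units)

# SNIa delay-time distribution: ~ 1/t beyond 40 Myr, normalized so that
# ~1.3e-3 SNIa are eventually produced per Msun of stars formed
dtd = np.where(t > 0.04, 1.0 / t, 0.0)
dtd *= 1.3e-3 / (dtd.sum() * dt)

rate_ia = np.convolve(sfr, dtd)[: t.size] * dt  # SNIa per Gyr
rate_cc = 0.01 * sfr                            # CC SNe per Gyr (~1 per 100 Msun)

# iron produced per event (toy yields): Ia ~ 0.7 Msun, CC ~ 0.07 Msun
crossover = t[np.argmax(0.7 * rate_ia > 0.07 * rate_cc)]
print(f"Ia iron production exceeds core-collapse at ~{crossover:.1f} Gyr")
```

With these numbers the crossover lands at a few Gyr, long after the first SNIa go off at ~40 Myr.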

The day gave me some time to reflect on my time with John Bahcall at the IAS. I have too much to say here, but I found myself in the evening reflecting on his remarkable and prescient scientific intuition. He was one of the few astronomers who understood, immediately on the early failure of HST, that it made more sense to try to repair it than try to replace it. This was a great realization, and transformed both astrophysics and NASA. He was also one of the few physicists who strongly believed that the Solar neutrino problem would lead to a discovery of new physics. Most particle physicists thought that the Solar model couldn't be that robust, and most astronomers didn't think about neutrinos. Boy was John right!

(I also snuck in a few minutes on my stellar twins document, which I gave to Poznanski for comments.)

2017-03-31

the future of astrophysical data analysis

Dan Foreman-Mackey (UW) crashed NYC today, surprising me, and disrupting my schedule. We began our day by arguing about the future of hierarchical modeling. His position is (sort-of) that the future is not hierarchical Bayes as it is currently done, but rather that we will be doing things that are much more ABC-like. That is, astrophysics theory is (generally) computational or simulation-based, and the data space is far too large for us to understand densities or probabilities in it. So we need ways to responsibly use simulations in inference. Right now the leading method is what is called (dumbly) ABC. I asked: So, are we going to do CMB component separation at the pixel level with ABC? This seems impossible at the present day, and DFM pointed out that ABC is best when precision requirements are low. When precision requirements are high, there aren't really options that have computer simulations inside the inference loop!
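
For concreteness, the simplest form of ABC is rejection sampling: draw parameters from the prior, simulate data, and keep the draws whose simulated summary statistics land within epsilon of the observed ones. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=500):
    """Forward model we can sample from, but whose likelihood we
    pretend is intractable."""
    return rng.normal(theta, 1.0, size=n)

observed = simulate(1.7)
obs_summary = observed.mean()  # summary statistic of the observed data

# ABC rejection: sample the prior, simulate, keep near-matches
epsilon, posterior = 0.05, []
for _ in range(20_000):
    theta = rng.uniform(-5, 5)  # prior draw
    if abs(simulate(theta).mean() - obs_summary) < epsilon:
        posterior.append(theta)

print(len(posterior), np.mean(posterior), np.std(posterior))
```

DFM's precision point is visible right there in epsilon: shrink it (or enrich the summaries) and the acceptance rate collapses, so the simulation budget explodes.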

Many other things happened today. I spent time with Lauren Anderson (Flatiron), validating and inspecting the output of our parallax inferences. I spent a phone call with Fed Bianco (NYU) talking about how to adapt Gaussian Processes to make models of supernovae light curves. And Foreman-Mackey and I spent time talking about linear algebra, and also this blog post, with which we more-or-less agree (though perhaps it doesn't quite capture all the elements that contribute (positively and negatively) to the LTFDFCF of astronomers!).
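
On the Gaussian-process point, here is a minimal sketch of the kind of thing we discussed (not Bianco's actual setup): fit a smooth GP through noisy photometry and read off an interpolated light curve with uncertainties:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(4)

# fake supernova-like light curve: fast rise, slow decline, noisy points
t = np.sort(rng.uniform(0, 60, 40))
flux = t * np.exp(-t / 15.0) + 0.3 * rng.normal(size=t.size)

kernel = ConstantKernel(10.0) * RBF(length_scale=10.0)
gp = GaussianProcessRegressor(kernel=kernel, alpha=0.3 ** 2, normalize_y=True)
gp.fit(t[:, None], flux)

t_grid = np.linspace(0, 60, 200)
mean, std = gp.predict(t_grid[:, None], return_std=True)
print(mean.shape, std.shape)  # smooth model and pointwise uncertainty
```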

2017-03-30

linear algebra; huge models

I had a long conversation today with Justin Alsing (Flatiron) about hierarchical Bayesian inference, which he is thinking about (and doing) in various cosmological contexts. He is thinking about inferring a density field that simultaneously models the galaxy structures and the weak lensing, to do a next-generation (and statistically sound) lensing tomography. His projects are amazingly sophisticated, and he is not afraid of big models. We also talked about using machine learning to do emulation of expensive simulations, initial-conditions reconstruction in cosmology, and moving-object detection in imaging.

I also spent time playing with my linear algebra expressions for my document on finding identical stars. Some of the matrices in play are low-rank, so I ought to be able to either simplify my expressions or else reduce the number of computational steps. Learning about my limitations, mathematically! One thing I re-discovered today is how useful it is to use the Kusse & Westwig notation and conceptual framework for thinking about hermitian matrices and linear algebra.
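
The standard trick for the low-rank pieces is the Woodbury identity, (A + U U^T)^{-1} = A^{-1} - A^{-1} U (I_K + U^T A^{-1} U)^{-1} U^T A^{-1}, which turns an N-by-N solve into a K-by-K solve when the update has rank K. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 500, 3

A = np.diag(rng.uniform(1, 2, N))  # easy-to-invert N x N piece
U = rng.normal(size=(N, K))        # rank-K update: A + U @ U.T
Ainv = np.diag(1.0 / np.diag(A))

small = np.eye(K) + U.T @ Ainv @ U  # only a K x K system to solve
woodbury = Ainv - Ainv @ U @ np.linalg.solve(small, U.T @ Ainv)

direct = np.linalg.inv(A + U @ U.T)
print(np.allclose(woodbury, direct))  # True
```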

2017-03-29

ergodic stars; blinding and pre-registration

In the Stars group meeting, Nathan Leigh (AMNH) and Nick Stone (Columbia) spoke about 4-body scattering or 2-on-2 binary-binary interactions. These can lead to 3-1, 2-2, and 2-1-1 outcomes, with the latter being most common. They are using a fascinating and beautiful ergodic-hypothesis-backed method (constrained by conservation laws) to solve for the statistical input–output relations quasi-analytically. This is a beautiful idea and makes predictions about star-system evolution in the Galaxy.

In the Cosmology group meeting, Alex Malz (NYU) led a long and wide-ranging discussion of blinding (making statistical results more reliable by pre-registering code or sequestering data). The range of views in the room was large, but all agreed that you need to be able to do exploratory data analysis and also protect against investigator bias. My position is that we had better be doing some form of blinding for our most important questions, but I also think that we need to construct these methods to permit people to play with the data and permit public data releases that are uncensored and unmodified. One theme which came up is that astronomy's great openness is a huge asset here. Fundamentally we are protected (in part) by the availability of the data for re-analysis.

2017-03-28

Local Group on the Local Group

Today, Kathryn Johnston (Columbia) organized a “Local Group on the Local Group” meeting at Columbia. Here are some highlights:

Lauren Anderson (Flatiron) gave an update on her data-driven model of the color–magnitude diagram of stars. This led to a conversation about which features in her deconvolved CMD are real, and whether there are too many red-clump stars given the total catalog size.

Steven Mohammed (Columbia) showed our GALEX Galactic-Plane survey data on the Gaia TGAS stars. The GALEX colors look very sensitive to metallicity and possibly other abundances. The audience suggested that we look at the full dependences on metallicity and temperature and surface gravity to see if we can break all degeneracies. This led to more discussion of the use of the Red Clump stars for Galactic science.

Adrian Price-Whelan (Columbia) presented a puzzle about the Galactic globular cluster system, which he has been thinking about. Are the distant clusters accreted? The in-situ formation hypothesis is unpalatable (it had to be many clusters at early times; should be many thin streams); the accreted hypothesis over-produces the smooth component of the stellar halo (unless dwarf galaxies had far more GCs per unit stellar mass in the past). These problems can be resolved, but only with strong predictions.

Yong Zheng (Columbia) spoke about the gaseous Magellanic stream and associated (or plausibly associated) high-velocity clouds. Many of the challenges in interpretation connect to the problem that we don't know where the gas is along the line of sight. She showed really nice data on something called Wright’s Cloud. For this huge structure—and for the stream as a whole—there is little to no associated stellar component.

Nicola Amorisco (Harvard) showed theoretical simulations of the accreted part of the MW (and MW-like-galaxy) halo, with the goal of finding stellar-halo observables that strongly co-vary with the assembly history of the dark-matter halo. Both theory and observations suggest large scatter in halo properties at Milky-Way-like masses, and much less scatter at higher masses (because of central-limit-like considerations). His results are promising for understanding the MW assembly history.

Glennys Farrar (NYU) spoke about the MW magnetic field, using rotation measures and CMB to constrain the model. She showed UHECR deflections in the inferred magnetic field, and also discussed implications of her results for electron and cosmic-ray diffusion. There are also tantalizing implications for the synchrotron spectrum and CMB component separation. One interesting comment: If her results are right for the scale and amplitude of the field, there are serious questions about origin and generation; is it primordial or generated on scales much larger than the galaxy?