2017-04-14

writing

I worked on putting references into my similarity-of-objects document (how do you determine that two different objects are identical in their measurable properties?), and tweaking the words, with the hope that I will have something postable to the arXiv soon.
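For concreteness, here is a minimal sketch of the statistical core of that question, assuming Gaussian measurement noise (a toy illustration of mine, not the document itself): if two objects are truly identical, the difference of their measured vectors should be chi-squared distributed under the summed covariance.

```python
import numpy as np
from scipy import stats

def identity_chi2(x1, C1, x2, C2):
    """Test whether two noisy measurement vectors are consistent with
    being measurements of identical objects (Gaussian-noise assumption).

    If the objects are identical, d = x1 - x2 has covariance C1 + C2,
    and chi2 = d^T (C1 + C2)^{-1} d follows a chi-squared distribution
    with len(d) degrees of freedom.
    """
    d = np.asarray(x1) - np.asarray(x2)
    chi2 = d @ np.linalg.solve(C1 + C2, d)
    pval = stats.chi2.sf(chi2, df=d.size)  # small p => probably not identical
    return chi2, pval
```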

2017-04-13

crazy space hardware

I spent today at JPL, where Leonidas Moustakas (JPL) set up for me a great schedule with various of the astronomers. I met the famous John Trauger (JPL), who was the PI on WFPC2 and deserves some share of the credit for repairing the Hubble Space Telescope. I discussed coronagraphy with Trauger and various others. I learned that coronagraphs need two deformable mirrors (not just one) to be properly adaptive. With Dimitri Mawet (Caltech) I discussed what kind of data set we would like to have in order to learn, in a data-driven way, to predictively adapt the deformable mirrors in a coronagraph that is currently taking data.

With Eric Huff (JPL) I discussed the possibility of doing weak lensing without ever explicitly measuring any galaxies—that is, measuring shear in the pixels of the images of the field directly. I also discussed with him the (apparently insane but maybe not) idea of using the Sun itself as a gravitational lens, capable of imaging continents on a distant, rocky exoplanet. This requires getting a spacecraft out to some 550 AU, and then positioning it to km accuracy! Oh and then blocking out the light from the Sun.
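As a sanity check on that 550 AU figure: light grazing the Sun is deflected by alpha = 4GM/(c^2 R), so the lens's focal distance is d = R/alpha = R^2 c^2 / (4GM). A quick back-of-the-envelope computation (standard constants, rounded):

```python
# Back-of-the-envelope check of the ~550 AU solar gravitational-lens
# focal distance quoted above.
G = 6.674e-11        # m^3 kg^-1 s^-2
M = 1.989e30         # kg, solar mass
R = 6.957e8          # m, solar radius
c = 2.998e8          # m/s
AU = 1.496e11        # m

d = R**2 * c**2 / (4 * G * M)   # focal distance of grazing rays
print(d / AU)                    # ~548 AU, consistent with the text
```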

Martin Elvis (CfA) gave a provocative talk today, about the future of NASA astrophysics in the context of commercial space, which might drive down prices on launch vehicles, and drive up the availability of heavy lift. A theme of his talk, and a theme of many of my conversations during the day, was just how long the time-scales are on NASA astrophysics missions, from proposal to launch. At some point missions might start to take longer than a career; that could be very bad (or at least very disruptive) for the field.

2017-04-12

ZTF; self-calibration; long-period planets

I spent today at Caltech, where I spoke about self-calibration. Prior to that I had many interesting conversations. From Anna Ho (Caltech) I learned that ZTF is going to image 15,000 square degrees per night. That is life-changing! I argued that they should position their fields to facilitate self-calibration, which might break some ideas they might have about image differencing.

With Nadia Blagorodnova (Caltech) I discussed calibration of the SED Machine, which is designed to do rapid low-resolution follow-up of ZTF and LSST events. They are using dome and twilight flats (something I said is a bad idea in my colloquium) and indeed they can see that these flats are deficient or inaccurate. We discussed how to take steps towards self-calibration.
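For reference, here is a minimal sketch of the self-calibration idea, in a toy setup of my own (not the SED Machine pipeline): if the observing strategy revisits stars across detector patches, one linear least-squares solve in log space recovers the flat-field and the star fluxes simultaneously.

```python
import numpy as np

# Toy self-calibration: star j observed through detector patch i gives
# counts m_ij ~ f_i * s_j, so log m_ij = log f_i + log s_j + noise.
# A single least-squares solve recovers the flat (f) and fluxes (s)
# simultaneously, up to one degenerate overall scale, which we pin
# with a constraint row enforcing sum(log f) = 0.
rng = np.random.default_rng(1)
n_patch, n_star, n_obs = 20, 200, 2000
true_logf = rng.normal(0.0, 0.05, n_patch)
true_logf -= true_logf.mean()                  # fix the degenerate scale
true_logs = rng.normal(10.0, 1.0, n_star)
i = rng.integers(0, n_patch, n_obs)            # patch of each observation
j = rng.integers(0, n_star, n_obs)             # star of each observation
logm = true_logf[i] + true_logs[j] + rng.normal(0.0, 0.01, n_obs)

# design matrix: one column per patch, one per star, plus constraint row
A = np.zeros((n_obs + 1, n_patch + n_star))
A[np.arange(n_obs), i] = 1.0
A[np.arange(n_obs), n_patch + j] = 1.0
A[n_obs, :n_patch] = 1.0
b = np.append(logm, 0.0)

x, *_ = np.linalg.lstsq(A, b, rcond=None)
logf_hat = x[:n_patch]                          # recovered flat ~ true_logf
```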

With Heather Knutson (Caltech) I discussed long-period planets. She is following up (with radial-velocity measurements) the discoveries that Foreman-Mackey and I (and others) made in the Kepler data. She doesn't entirely agree with our finding that there are something like 2 planets per star (!) at long periods, though of course her radial-velocity work has different sensitivity to planets. We discussed the possibility of using radial-velocity surveys to do planet-populations work; she believes it is possible (something I have denied previously, on the grounds of unrecorded human decision-making in the observing strategies).

In my talk I made some fairly aggressive statements about Euclid's observing strategies and calibration. That got me some valuable feedback, including some hope that they will modify their strategies before launch. The things I want can be set or modified at the 13th hour!

2017-04-11

self-calibration

I worked more today on my slides on self-calibration for the 2017 Neugebauer Lecture at Caltech. I had an epiphany, which is that the color–magnitude diagram model I am building with Lauren Anderson (Flatiron) can be seen in the same light as self-calibration. The “instrument” we are calibrating is the physical regularities of stars! (This can be seen as an instrument built by God, if you want to get grandiose.) I also drew a graphical model for the self-calibration of the Sloan Digital Sky Survey imaging data that we did oh so many years ago. It would probably be possible to re-do it with full Bayes with contemporary technology!

2017-04-10

causal photometry

Last year, Dun Wang (NYU) and Dan Foreman-Mackey (UW) discovered, on a visit to Bernhard Schölkopf (MPI-IS), that independent components analysis can be used to separate spacecraft and stellar variability in Kepler imaging, and perform variable-source photometry in crowded-field imaging. I started to write that up today. ICA is a magic method, which can't be correct in detail, but which is amazingly powerful straight out of the box.
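Here is a toy illustration of the idea (a synthetic example of mine, not the actual Kepler pipeline): mix a slow "spacecraft" drift and a periodic "stellar" signal into several pixel light curves, and let FastICA blindly unmix them.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two latent signals -- a slow "spacecraft" drift and a boxy "stellar"
# variation -- linearly mixed into six pixel light curves, then unmixed
# blindly with FastICA (no model of either signal).
rng = np.random.default_rng(42)
t = np.linspace(0.0, 10.0, 1000)
spacecraft = np.sin(0.3 * t) + 0.1 * t          # slow systematic drift
star = np.sign(np.sin(7.0 * t))                 # boxy stellar variability
S = np.c_[spacecraft, star]                     # (1000, 2) source signals
A = rng.uniform(0.5, 2.0, size=(6, 2))          # mixing into 6 "pixels"
X = S @ A.T + 0.02 * rng.standard_normal((1000, 6))

# FastICA recovers the components up to sign, scale, and ordering.
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(X)                # (1000, 2) unmixed signals
```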

I also worked on my slides for the 2017 Neugebauer Memorial Lecture at Caltech, which is on Wednesday. I am giving a talk the likes of which I have never given before.

2017-04-07

searches for cosmological estimators

I spent my research time today working through pages of the nearly-complete PhD dissertation of MJ Vakili (NYU). The thesis contains results in large-scale structure and image processing, which are related through long-term goals in weak lensing. In some ways the most exciting part of the thesis for me right now is the part on HST WFC3 IR calibration, in part because it is new, and in part because I am going to show some of these results in Pasadena next week.

In the morning, Colin Hill (Columbia) gave a very nice talk on secondary anisotropies in the cosmic microwave background. He has found a new (and very simple) way to detect the kinetic S-Z effect statistically, and can use it to measure the baryon fraction in large-scale structure empirically. He has found a new statistic for measuring the thermal S-Z effect too, which provides more constraining power on cosmological parameters. In each case, his statistic or estimator is cleverly designed around physical intuition and symmetries. That led me to ask him whether even better statistics might be found by brute-force search, constrained by symmetries. He agreed, and has even done some thinking along these lines already.

2017-04-06

direct detection of the cosmic neutrino background

Today was an all-day meeting at the Flatiron Institute on neutrinos in cosmology and large-scale structure, organized by Francisco Villaescusa-Navarro (Flatiron). I wasn't able to be at the whole meeting, but two important things I learned in the part I saw are the following:

Chris Tully (Princeton) astonished me by showing his real, funded attempt to actually directly detect the thermal neutrinos from the Big Bang. That is audacious. He has a very simple design, based on capture of electron neutrinos by tritium that has been very loosely bound to a graphene substrate. Details of the experiment include absolutely enormous surface areas of graphene, and also very clever focusing (in a phase-space sense) of the liberated electrons. I'm not worthy!

Raul Jimenez (Barcelona) spoke about (among other things) a statistical argument for a normal (rather than inverted) hierarchy for neutrino masses. His argument depends on putting priors over neutrino masses and then computing a Bayes factor. This argument made the audience suspicious, and he got some heat during and after his talk. Some comments: One is that he is not just doing simple Bayes factors; he is learning a hierarchical model and assessing within that. That is a good idea. Another is that this is actually the ideal place to use Bayes factors: Both models (normal and inverted) have exactly the same parameters, with exactly the same prior. That obviates many of my usual objections (yes, my loyal reader may be sighing) to computing the integrals I call FML. I need to read and analyze his argument at some point soon.
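To make the FML point concrete, here is a toy Monte Carlo version of the calculation (invented numbers, not Jimenez's analysis): when two models share one parameter and one prior, each evidence is just the prior-averaged likelihood, and the Bayes factor is their ratio.

```python
import numpy as np

# Toy FML (fully marginalized likelihood) comparison. All numbers are
# made up for illustration; the point is only the structure: shared
# parameter m, shared prior, different likelihoods.
rng = np.random.default_rng(0)

def log_like_normal(m):      # hypothetical likelihood, "normal" model
    return -0.5 * ((m - 0.06) / 0.02) ** 2

def log_like_inverted(m):    # hypothetical likelihood, "inverted" model
    return -0.5 * ((m - 0.10) / 0.02) ** 2

m = rng.uniform(0.0, 0.3, size=200_000)   # shared prior: uniform on [0, 0.3]

fml_normal = np.mean(np.exp(log_like_normal(m)))    # simple MC evidence
fml_inverted = np.mean(np.exp(log_like_inverted(m)))
print("Bayes factor (normal / inverted):", fml_normal / fml_inverted)
```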

One amusing note about the day: For technical reasons, Tully really needs the neutrino mass hierarchy to be inverted (not normal), while Jimenez is arguing that the smart money is on the normal (not inverted) hierarchy.

2017-04-05

a stellar stream with only two stars? And etc

In Stars group meeting, Stephen Feeney (Flatiron) walked us through his very complete hierarchical model of the distance ladder, including supernova Hubble Constant measurements. He can self-calibrate and propagate all of the errors. The model is seriously complicated, but no more complicated than it needs to be to capture the covariances and systematics that we worry about. He doesn't resolve (yet) the tension between distance ladder and CMB (especially Planck).

Semyeong Oh (Princeton) and Adrian Price-Whelan (Princeton) reported on some of their follow-up spectroscopy of co-moving pairs of widely separated stars. They have a pair that is co-moving, moving at escape velocity in the halo, and separated by 5-ish pc! This could be a cold stellar stream detected with just two stars! How many of those will we find! Yet more evidence that Gaia changes the world.

Josh Winn (Princeton) dropped by and showed us a project that, by finding very precise stellar radii, gets more precise planet radii. That, in turn, shows that the small planets really split into two populations, super-Earths and mini-Neptunes, with a deficit between them. Meaning: There are non-trivial features in the planet-radius distribution. He showed some attempts to demonstrate that this is real, reminding me of the whole accuracy-vs-precision thing, once again.

In Cosmology group meeting, Dick Bond (CITA) corrected our use of “intensity mapping” to “line intensity mapping” and then talked about things that might be possible as we observe more and more lines in the same volume. There is a lot to say here, but some projects are going small and deep, and others are going wide and shallow; we learn complementary things from these approaches. One question is: How accurate do we need to be in our modeling of neutral and molecular gas, and the radiation fields that affect them, in order for us to do cosmology with these observables? I am hoping we can simultaneously learn things about the baryons, radiation, and large-scale structure.

2017-04-04

words on a plane

On the plane home, I worked on my similarity-of-vectors (or stellar twins) document. I got it to the first-draft stage.

2017-04-03

how to add and how to subtract

My only research today was conversations about various matters of physics, astrophysics, and statistics with Dan Maoz (TAU), as we hiked near the Red Sea. He recommended these three papers on how to add and how to subtract astronomical images. I haven't read them yet, but as my loyal reader knows, the word “optimal” is a red flag for me, as in I'm-a-bull-in-a-bull-ring type of red flag. (Spoiler alert: The bull always loses.)

On the drive home Maoz expressed the extremely strong opinion that dumping a small heat load Q inside a building during the hot summer does not lead to any additional load on that building's air-conditioning system. I spent part of my late evening thinking about whether there are any conceivable assumptions under which this position might be correct. Here's one: The building is so leaky (of air) that the entire interior contents of the building are replaced before the A/C has cooled it by a significant amount. That would work, but it would also be a limit in which A/C doesn't do anything at all, really; that is, in this limit, the interior of the building is the same temperature as the exterior. So I think I concluded that if you have a well-cooled building, if you add heat Q internally, the A/C must do marginal additional work to remove it. One important assumption I am making is the following (and maybe this is why Maoz disagreed): The A/C system is thermostatic and hits its thermostatic limits from time to time. (And that is inconsistent with the ultra-leaky-building idea, above.)
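For the record, here is the steady-state balance behind my conclusion, assuming a thermostatic A/C (with coefficient of performance COP) that holds the interior temperature fixed:

$$\dot{Q}_{\rm pumped} = \dot{Q}_{\rm leak} + \dot{Q}_{\rm internal}, \qquad \Delta \dot{W}_{\rm A/C} = \frac{\dot{Q}_{\rm internal}}{\mathrm{COP}} > 0,$$

that is, any internal heat dump $\dot{Q}_{\rm internal}$ costs marginal compressor work, except in the degenerate limit where the thermostat never engages.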

2017-04-02

John Bahcall (and etc)

I spent today at Tel Aviv University, where I gave the John Bahcall Astrophysics Lecture. I spoke about exoplanet detection and population inferences. I spent quite a bit of the day with Dovi Poznanski (TAU) and Dan Maoz (TAU). Poznanski and I discussed extensions and alternatives to his projects to use machine learning to find outliers in large astrophysical data sets. This continued conversations with him and Dalya Baron (TAU) from the previous evening.

Maoz and I discussed his conversions of cosmic star-formation history into metal-enrichment histories. These involve the SNIa delay times, and they provide new interpretations of the alpha-to-Fe vs Fe-to-H ratio diagrams. The alpha-to-Fe ratios don't drop when the SNIa kick in (that's the standard story, but it's wrong); they drop when the SNIa contribution to the metal-production rate exceeds the core-collapse contribution. If the star-formation history is continuous, this can be long after the appearance of the first SNe Ia. Deep stuff.
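A toy version of that bookkeeping (my own sketch, with made-up normalizations and yields, not Maoz's calculation): convolve a continuous star-formation history with a roughly t^-1 SNIa delay-time distribution, and compare the SNIa metal-production rate to the core-collapse rate, which tracks the star-formation rate essentially instantly.

```python
import numpy as np

# Toy enrichment bookkeeping; all normalizations and yields are
# invented for illustration.
t = np.linspace(0.0, 13.0, 1300)                 # Gyr
dt = t[1] - t[0]
sfr = np.exp(-t / 5.0)                           # hypothetical continuous SFH

# ~t^-1 SNIa delay-time distribution with a 40 Myr minimum delay
dtd = np.where(t > 0.04, 1.0 / np.maximum(t, 0.04), 0.0)

y_Ia, y_cc = 0.7, 0.1                            # made-up yields per event
rate_Ia = np.convolve(sfr, dtd)[: t.size] * dt   # SNIa rate = SFH * DTD
metals_Ia = y_Ia * rate_Ia
metals_cc = y_cc * sfr                           # CC rate tracks the SFR

# With continuous star formation, metals_Ia overtakes metals_cc only
# long after the first SNe Ia appear (exactly where depends on the
# normalizations invented above).
ratio = metals_Ia / np.maximum(metals_cc, 1e-12)
```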

The day gave me some time to reflect on my time with John Bahcall at the IAS. I have too much to say here, but I found myself in the evening reflecting on his remarkable and prescient scientific intuition. He was one of the few astronomers who understood, immediately on the early failure of HST, that it made more sense to try to repair it than try to replace it. This was a great realization, and transformed both astrophysics and NASA. He was also one of the few physicists who strongly believed that the Solar neutrino problem would lead to a discovery of new physics. Most particle physicists thought that the Solar model couldn't be that robust, and most astronomers didn't think about neutrinos. Boy was John right!

(I also snuck in a few minutes on my stellar twins document, which I gave to Poznanski for comments.)