2019-10-31

climate on hot jupiters

A no-research day (Thursdays are always bad) ended on a great note with a Colloquium by Ian Dobbs-Dixon (NYUAD), who spoke about the atmospheres of hot-Jupiter-like exoplanets. He has a great set of computational machinery that connects a global climate model built for Earth climate modeling to lots of planet-relevant physics (like strong, anisotropic insolation and internal heat flows) to figure out what must be happening on these planets. He showed some nice predictions and also some nice explanations of the observed property (yes, observed property) that these planets do not have their hottest point at the sub-stellar point. It's so exciting when we think forward to what might be possible with NASA JWST.

2019-10-30

hierarchical calibration

My main research contribution today was to write some notes for myself and Lily Zhao (Yale) about how we might start to produce a low-dimensional, hierarchical, non-parametric calibration model for the EXPRES spectrograph.

2019-10-29

how does my watch even work?

At the end of a long faculty meeting at NYU Physics, my colleague Shura Grosberg came to me to discuss a question we have been discussing at a low rate for many months: How is it possible that my watch (my wristwatch) is powered purely by the stochastic motions of my arm, when thermal ratchets are impossible? He presented to me a very simple model, in which my watch is seen as a set of three coupled systems. The first is the winder, which is a low-Q oscillator that operates at long periods. The second is the escapement and spring, which is a high-Q oscillator with a period of 0.2 seconds. The third is the thermal bath of noise to which the watch dissipates energy. If my arm delivers power only at long periods (or mainly at long periods), then it couples well only to the first of these. And then power can flow to the other two systems. Ah, I love physicists!
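
After the meeting, I couldn't resist a toy numerical check of this argument. Here is a minimal sketch (the frequencies, Q values, and names are all made up by me for illustration): the time-averaged power absorbed by a driven, damped oscillator falls off steeply when the drive is far from resonance, so a slow arm swing pumps the low-Q, long-period winder and barely touches the 0.2-second balance.

```python
import numpy as np

def mean_power(omega, omega0, Q, F0_over_m=1.0):
    # Time-averaged power (per unit mass) absorbed by a driven, damped
    # oscillator x'' + (omega0/Q) x' + omega0^2 x = (F0/m) cos(omega t).
    # Standard steady-state result.
    gamma = omega0 / Q
    return 0.5 * F0_over_m**2 * gamma * omega**2 / (
        (omega0**2 - omega**2)**2 + gamma**2 * omega**2)

omega_arm = 2 * np.pi / 2.0  # arm swings with a ~2-second period
p_winder = mean_power(omega_arm, omega0=2 * np.pi / 1.0, Q=2)     # low-Q, long period
p_balance = mean_power(omega_arm, omega0=2 * np.pi / 0.2, Q=300)  # high-Q, 0.2 s
print(p_winder / p_balance)  # >> 1: the arm pumps the winder, not the balance
```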

2019-10-28

milli-charged dark matter

As my loyal reader knows, I love the Brown-Bag talks at the Center for Cosmology and Particle Physics. Today was a great example! Hongwan Liu (NYU) spoke about milli-charged dark matter. Putting a charge in the dark sector is a little risky, because the whole point of dark matter is that it is invisible, electromagnetically! But it turns out that if you include enough particle complexity in the dark sector, you can milli-charge the dark matter and move thermal energy from the light sector into the dark sector and vice versa.

Liu was motivated by some issues with 21-cm intensity mapping, but he has some very general ideas and results in his work. I was impressed by the point that his work involves the heat capacity of the dark sector. That's an observable, in principle! And it depends on the particle mass, because a dark sector with smaller particle mass has more particles and therefore more degrees of freedom and more heat capacity! It's interesting to think about the possible consequences of this. Can we rule out very small masses somehow?
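
My back-of-envelope gloss on that point (my notation, not Liu's calculation): at fixed dark-matter mass density, the number density and hence the ideal-gas heat capacity scale inversely with the particle mass,

```latex
n = \frac{\rho_{\rm DM}}{m} , \qquad
C_V \sim \frac{3}{2}\, n\, k_B
    = \frac{3}{2}\, \frac{\rho_{\rm DM}}{m}\, k_B .
```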

2019-10-26

using phase to interpolate between images

Continuing on stuff I got distracted into yesterday (when I should have been working on NSF proposals!), I did some work on phase manipulation to interpolate between images. The idea was: Fourier transform both images, and interpolate in amplitude and phase independently, rather than just interpolating the complex numbers in a vector sense. It works in some respects and not in others. And it works much better on a localized image patch than on a whole image. I made this tweet to demonstrate. This is related to the idea that people who do this professionally use wavelet-like methods to get local phase information in the image instead of manipulating global phase. So the trivial thing doesn't work; I need to learn more!
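
For concreteness, here is a minimal numpy sketch of the trivial, global-phase version (the function name and interface are mine; the professional, wavelet-like versions localize the phase instead of working globally):

```python
import numpy as np

def phase_interpolate(img_a, img_b, t):
    # Fourier transform both images, then interpolate amplitude and
    # phase separately, rather than interpolating the complex numbers.
    Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    amp = (1.0 - t) * np.abs(Fa) + t * np.abs(Fb)
    # take the phase difference the short way around the circle:
    dphi = np.angle(Fb * np.conj(Fa))
    phase = np.angle(Fa) + t * dphi
    return np.real(np.fft.ifft2(amp * np.exp(1j * phase)))
```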

2019-10-25

substructure, phases, EPRV

Nora Shipp (Chicago) has been in town this week, working with Adrian Price-Whelan to find halo substructures and stellar streams around the Milky Way. The two of them made beautiful animations, paging through distance slices, showing halo stellar density (as measured by a color-magnitude matched filter). There are lots of things visible in those animations! We discussed the point that what makes overdensities appear to the human eye is their coherence through slices.

That made me think of things that Bill Freeman (MIT) and his lab do with amplifying small signals in video: Should we be looking for small overdensities with similar tricks? Freeman's lab uses phase transforms (like Fourier transforms and more localized versions of those) to detect and amplify small motions. Maybe we should use phase transforms here too. That led Price-Whelan and me to hack a little bit on this image pair by Judy Schmidt, which was fun but useless!

Late in the day, Megan Bedell (Flatiron), Lily Zhao (Yale), Debra Fischer (Yale), and I all met to discuss EXPRES data. It turns out that what the EXPRES team has in terms of data, and what they need in terms of technology, is incredibly well aligned with what Bedell and I want to do in the EPRV space. For one thing, EXPRES has been used to resolve the asteroseismic p-modes in a star. For another, it has made excellent observations of a spotty star. For another, it has a calibration program that wants to go hierarchical. I left work at the end of the day extremely excited about the opportunities here.

2019-10-24

particle phenomenology

Today Josh Ruderman (NYU) gave a great Physics Colloquium about particle physics phenomenology, from measuring important standard-model parameters with colliders to finding new particles in cosmology experiments. It was very wide-ranging and filled with nice insights about (among other things) thermal-relic dark matter and the observability of different kinds of dark-sector activity. One theme of the dark-matter talks I have seen recently is that most sensible, zeroth-order bounds (like those on mass and cross section for a thermal-relic WIMP) can be modified by slightly complexifying the problem (like by adding a dark photon or another dark state). Ruderman navigated a bunch of that for us nicely, and convinced us that there is lots to do in particle theory, even if the LHC remains in a standard-model desert.

2019-10-22

more brokers

Our LSST broker discussions from yesterday continued at the Cosmology X Machine Learning group meeting at Flatiron. The group helped us think a little bit about the supervised and unsupervised options in the time-domain space.

2019-10-21

LSST broker dreams

My day ended with a long conversation with Sjoert van Velzen (NYU), Tyler Pritchard (NYU), and Maryam Modjaz (NYU) about possible things we could be doing in the LSST time-domain and broker space. Our general interest is in finding unusual and outlier events: events that are interesting either because they are unprecedented, or because they are unusual within some subclass, or because they imply odd physical parameters or strange conditions. But we don't have much beyond that! We need to get serious in the next few months, because there will be proposal calls.

2019-10-19

complexifying optimal extraction

As my loyal reader knows, I have opinions about spectroscopic extraction—the inference of the one-dimensional spectrum of an object as a function of wavelength, given the two-dimensional image of the spectrum in the spectrograph detector plane. The EXPRES team (I happen to know) and others have the issue with their spectrographs that the cross-dispersion direction (the direction precisely orthogonal to the wavelength direction) is not always perfectly aligned with the y direction on the detector. This is a problem because the very simple extraction methods available only apply when the two are aligned.

I spent parts of the day writing down not the general solution to this problem (which might possibly be Bolton & Schlegel's spectro-perfectionism, although I have issues with that too), but rather an expansion around the perfectly-aligned case that leads to an iterative solution while preserving the solutions that work at perfect alignment. It's so beautiful! As expansions usually are.
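
For reference, the perfectly-aligned case that the expansion is built around is essentially Zechmeister et al.'s flat-relative optimal extraction, which is a one-liner per wavelength column. A minimal sketch (my variable names; the tilt expansion and the iteration are the new parts and are not shown here):

```python
import numpy as np

def flat_relative_extract(S, F, ivar):
    # S, F, ivar: 2D arrays (y, x) holding the science image, the flat
    # image, and the inverse variance of the science image.
    # In the perfectly-aligned case, the flat-relative spectrum at each
    # wavelength column x is an inverse-variance-weighted ratio:
    num = np.sum(ivar * F * S, axis=0)
    den = np.sum(ivar * F * F, axis=0)
    return num / den  # the spectrum, in units of the flat
```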

What to call this? I am building on Zechmeister et al.'s “flat-relative optimal extraction”. But I'm allowing tilts. So Froet? Is that a rude word in some language?

2019-10-18

combining data; learning rates

Marla Geha (Yale) crashed Flatiron today and we spent some time talking about a nice problem in spectroscopic data analysis: Imagine that you have a pipeline that works on each spectrum (or each exposure or each plate or whatever) separately, but that the same star has been observed multiple times. How do you post-process your individual-exposure results so that you get combined results that are the same as what you would have gotten if you had processed them all simultaneously? You want the calibration to be independent for each exposure, but the stellar template to be the same, for example. This is very related to the questions that Adrian Price-Whelan (Flatiron) and I have been solving in the last few weeks. You have to carry forward enough marginalized-likelihood information to combine later. This involves marginalizing out the individual-exposure parameters but not the shared parameters. (And maybe making some additional approximations!)
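
In the Gaussian limit, what you carry forward per exposure is just a quadratic form in the shared parameters. A minimal sketch of the combination step (hypothetical interface; it assumes each exposure i has already been reduced to ln L_i = -1/2 theta^T A_i theta + b_i^T theta + const, with its individual-exposure parameters marginalized out):

```python
import numpy as np

def combine_exposures(As, bs):
    # Each exposure contributes an information matrix A_i and a vector b_i
    # in the *shared* parameters theta, with its individual-exposure
    # (calibration) parameters already marginalized out.
    A = sum(As)                        # information adds across exposures
    b = sum(bs)
    theta_hat = np.linalg.solve(A, b)  # combined best-fit shared parameters
    cov = np.linalg.inv(A)             # combined uncertainty
    return theta_hat, cov
```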

As is not uncommon on a Friday, Astronomical Data Group meeting was great! So many things. One highlight for me was that Lily Zhao (Yale) has diagnosed—and figured out strategies related to—problems we had in wobble with the learning rate on our gradient descent. I hate optimization! But I love it when very good people diagnose and fix the problems in our optimization code!

2019-10-17

intuitions about marginal likelihoods

Thursdays are low-research days. I did almost nothing reportable here according to The Rules. I did have a valuable conversation with Price-Whelan (Flatiron) about marginalized likelihoods, and I started to get an intuition about why our factorization of Gaussian products has the form that it has. It has to do with the fact that the marginalized likelihood (the probability of the data, fully marginalizing out all linear parameters) has a variance for the data that is the sum in quadrature of the noise variance and the model variance. Ish!
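
Concretely (in my notation): for a linear model with design matrix M, Gaussian noise with variance C, and a Gaussian prior with mean mu and variance Lambda on the linear parameters, marginalizing out those parameters gives

```latex
p(y) = \int \mathcal{N}(y \mid M\theta,\, C)\,
            \mathcal{N}(\theta \mid \mu,\, \Lambda)\, \mathrm{d}\theta
     = \mathcal{N}(y \mid M\mu,\; C + M \Lambda M^\top) ,
```

that is, the variance of the data is the noise variance plus the (projected) model variance. That's the sum in quadrature.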

2019-10-16

code from the deep past

I had an amusing email out of the blue, asking me to dig up the IDL (yes, IDL) code that I (and Blanton and Bovy and Johnston and Roweis and others) wrote to analyze the local velocity field using the ESA Hipparcos data. Being a huge supporter of open science, I had to say yes to this request. I dug through old cvs repositories (not svn, not git, but cvs), found the code, and moved it to Github (tm) here. I didn't truly convert the cvs repo to git, so I erased history, which is bad. But time is precious, and I could always fix that later. I hereby apologize to my co-authors!

All this illustrates to me that it is very good to put your code out in the open. One reason is that then you don't have to go digging like this; a simple google search would have found it! Another is that when you know your code will be out in the open, you are (at least slightly) more likely to make it readable and usable by others. I dug up this code and threw it to the world, but will anyone other than the authors ever be able to make any use of it? Or even understand it? I don't know.

2019-10-15

calibrating a fiber spectrograph

I had my weekly call with Ana Bonaca (Harvard) this morning, where she updated me on our look at systematic effects in the radial-velocity measurements we are getting out of Hectochelle. We see very small shifts in stellar radial velocities across the field of view that seem unlikely to be intrinsic to the astrophysical stellar systems we are observing. At this point, Bonaca can show that these velocity shifts do not appear in the sky lines; that is, the calibration (with arc lamps) of the wavelengths on the detector is good.

All I have left at this point is that maybe the stars illuminate the fibers differently from the sky (and arc lamps) and this difference in illumination is transmitted to the spectrograph. I know how to test that, but it requires observing time; we can't do it in the data we have in hand right now. This is an important thing for me to figure out though, because it is related to how we commission and calibrate the fiber robot for SDSS-V. Next question: Will anyone give us observing time to check this?

2019-10-11

nothing

Today was almost all admin and teaching. But I did get to the Astronomical Data Group meeting at Flatiron, where we had good discussions of representation learning, light curves generated by spotted stars, the population of planets around slightly evolved stars, and accreted stellar systems in the Milky Way halo!

2019-10-10

image denoising; the space of natural images

I got in a bit of research on a mostly-teaching day. I saw the CDS Math-and-Data seminar, given by Peyman Milanfar (Google), about de-noising models. In particular, he was talking about some of the theory and ideas behind the de-noising that Google uses in its Pixel cameras and related technology. They use methods that are adaptive to the image itself but which don't explicitly learn a library of image priors or patch priors or anything like that from data. (But they do train the models on human reactions to the denoising.)

Milanfar's theoretical results were nice. For example: De-noising is like a gradient step in response to a loss function! That's either obvious or deep. I'll go with deep. And good images (non-noisy natural images) should be fixed points of the de-noising projection (which is in general non-linear). Their methods identify similar parts of the images and use commonality of those parts to inform the nonlinear projections. But he explained all this with very simple notation, which was nice.
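
My own gloss on the gradient-step point, in score-based language (this is my framing, not necessarily Milanfar's notation): for additive Gaussian noise of variance sigma^2, the minimum-mean-squared-error denoiser obeys Tweedie's formula,

```latex
\hat{x}(y) = y + \sigma^2\, \nabla_y \log p(y) ,
```

so the denoiser moves the noisy image uphill on the log density of noisy images; where that density peaks (good images), the gradient vanishes, and the image is an approximate fixed point.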

After the talk I had a quick conversation with Jonathan Niles-Weed (NYU) about the geometry of the space of natural images. Here's a great argument he gave: Imagine you have two arbitrarily different images, like one of the Death Star (tm) and one of the inside of the seminar room. Are these images connected to one another in the natural-image subspace of image space? That is, is there a continuous transformation from one to the other, every point along which is itself a good natural image?

Well, if I can imagine a continuous tracking shot (movie) of me walking out of the seminar room and into a spaceship and then out of the airlock on a space walk to repair the Death Star (tm), and if every frame in that movie is a good natural image, and everything is continuous, then yes! What a crazy argument. The space of all natural images might be one continuously connected blob. Crazy! I love the way mathematicians think.

2019-10-09

finding very long signals in very short data streams

So many things. I love Wednesdays. Here's one: I spent a lot of the day working with Adrian Price-Whelan (Flatiron) on our issues with The Joker. We found some simple test cases, made a toy version that has good properties, and compared it to the code. Maybe we found a sign error!? But all this is in service of a conceptual data-analysis project I want to think about much more: What can you say about signals with periodicity (or structure) on time scales far, far longer than the baseline of your observations? Think long-period companions in RV surveys or Gaia data. Or the periods of planets that transit only once in your data set. Or month-long asteroseismic modes in a giant star observed for only a week. I think it would be worth getting some results here (and I am thinking information theory), because I think there will be some interesting scalings (like lots of things might have precisions that scale better (faster, I mean) than the square root of the time baseline).

In Stars & Exoplanets meeting at Flatiron, many cool things happened! But a highlight for me was a discovery (reported by Saurabh Jha of Rutgers) that the bluest type Ia supernovae are more standardizeable (is that a word?) candles than the redder ones. He asked us how to combine the information from all supernovae with maximum efficiency. I know how to do that! We opened a thread on that. I hope it pays off.

2019-10-08

Planck maps

Today Kristina Hayhurst (NYU) came to my office and, with a little documentation-hacking, we figured out how to read and plot the ESA Planck maps released in the Planck archive! I am excited, because there is so much to look at in these data. Hayhurst's project is to look at the “Van Gogh” plot of the polarization: Can we do this better?
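
For the record, the reading and plotting ended up being only a few lines with healpy. A minimal sketch (the filename is just an example of a map product from the archive; any released map should work the same way):

```python
# pip install healpy
import healpy as hp
import matplotlib.pyplot as plt

# read the temperature field (field=0) of a Planck CMB map downloaded
# from the Planck Legacy Archive; the filename here is an example
m = hp.read_map("COM_CMB_IQU-commander_2048_R3.00_full.fits", field=0)
hp.mollview(m, title="Planck CMB temperature", unit="K_CMB",
            min=-3e-4, max=3e-4)
plt.show()
```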

2019-10-07

connections between the dark and standard sectors

In the CCPP Brown-Bag seminar today, Neal Weiner (NYU) spoke about the possible connections between the dark sector (where dark matter lives) and our sector (where the standard model lives). He discussed the WIMP miracle, and then where we might look in phenomenology space for the particle interactions that put the WIMPs or related particles in equilibrium with the standard-model particles in the early Universe.

In the afternoon, I worked with Abby Shaum (NYU) and Kate Storey-Fisher (NYU) to get our AAS abstracts ready for submission to the AAS Winter Meeting in Honolulu.

2019-10-06

got it!

Adrian Price-Whelan (Flatiron) and I spent time this past week trying to factorize products of Gaussians into new products of different Gaussians. The context is Bayesian inference, where you can factor the joint probability of the data and your parameters into a likelihood times a prior or else into an evidence (what we here call the FML) times a posterior. The factorization was causing us pain this week, but I finally got it this weekend, in the woods. The trick I used (since I didn't want to expand out enormous quadratics) was to use a determinant theorem to get part of the way, and some particularly informative terms in the quadratic expansion to get the rest of the way. Paper (or note or something) forthcoming...
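
For the record, I believe the target identity is the standard Gaussian factorization (my notation): with likelihood N(y | M theta, C) and prior N(theta | mu, Lambda) over the linear parameters theta,

```latex
\mathcal{N}(y \mid M\theta,\, C)\,\mathcal{N}(\theta \mid \mu,\, \Lambda)
  = \mathcal{N}(y \mid M\mu,\; C + M\Lambda M^\top)\,
    \mathcal{N}(\theta \mid a,\, A) ,
\qquad
A = (M^\top C^{-1} M + \Lambda^{-1})^{-1} , \quad
a = A\,(M^\top C^{-1} y + \Lambda^{-1}\mu) ,
```

where the first factor on the right is the evidence (the FML) and the second is the posterior. The determinant theorem in question is (I think) the matrix determinant lemma, |C + M Lambda M^T| = |C| |Lambda| |Lambda^{-1} + M^T C^{-1} M|, which gets you the normalization without expanding the enormous quadratics.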

2019-10-04

mitigating p-modes in EPRV

Megan Bedell (Flatiron) and I continued our work from earlier this week on making a mechanical model of stellar asteroseismic p-modes as damped harmonic oscillators driven by white noise. Because the model is so close to closed-form (it is closed form between kicks, and the kicks are regular and of random amplitude), the code is extremely fast. In a couple minutes we can simulate a realistic, multi-year, dense, space-based observing campaign with a full forest of asteroseismic modes.
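
Here is a minimal sketch of the idea (not our actual code; the parameter values and names are mine): evolve the oscillator state in closed form between regularly spaced velocity kicks of random amplitude.

```python
import numpy as np

def kicked_oscillator(n_steps, dt, omega0, gamma, kick_rms, seed=0):
    # Damped harmonic oscillator x'' + gamma x' + omega0^2 x = kicks,
    # driven by a delta-function velocity kick of random amplitude at
    # every time step, and evolved *in closed form* between kicks.
    rng = np.random.default_rng(seed)
    om = np.sqrt(omega0**2 - 0.25 * gamma**2)  # underdamped frequency
    c, s, e = np.cos(om * dt), np.sin(om * dt), np.exp(-0.5 * gamma * dt)
    # exact propagator for the state (x, v) over one interval dt:
    P = e * np.array([[c + 0.5 * gamma * s / om, s / om],
                      [-(omega0**2) * s / om, c - 0.5 * gamma * s / om]])
    xv = np.zeros(2)
    v = np.empty(n_steps)
    for i in range(n_steps):
        xv = P @ xv                        # coast (closed form)
        xv[1] += kick_rms * rng.normal()   # kick the velocity
        v[i] = xv[1]                       # the RV signal of this mode
    return v

# e.g., roughly Sun-like numbers: a 5-minute mode with a ~2-day lifetime
rv = kicked_oscillator(n_steps=100000, dt=10.0,
                       omega0=2 * np.pi / 300.0, gamma=1.0 / 2e5,
                       kick_rms=0.01)
```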

The first thing we did with our model was to check the results of the recent paper on p-mode mitigation by Chaplin et al., which suggests that you can mitigate p-mode noise in precision radial-velocity observation campaigns by a good choice of exposure time. We expected, at the outset, that the results of this paper were too optimistic: We expected that a fixed exposure time would not do a good job all the time, given the stochastic driving of the modes and the fact that there are many modes in a frequency window around the strongest ones. But we were wrong, and the Chaplin et al. paper is correct! Which is good.

However, we believe that we can do better than exposure-time-tuning for p-mode mitigation. We believe that we can fit the p-modes with the (possibly non-stationary) integral of a stationary Gaussian process, tuned to the spectrum. That's our next job.

2019-10-02

rotating stars, a mechanical model for an asteroseismic mode

Our weekly Stars and Exoplanets Meeting at Flatiron was all about stellar rotation somehow this week (no, we don't plan this!). Adrian Price-Whelan (Flatiron) showed that stellar rotation rates can get so large in young clusters that stars move off the main sequence, and the main sequence can even look double. We learned (or I learned) that a significant fraction of young stars are born spinning very close to break-up. I immediately thought this was obviously wrong and then very quickly decided it was obvious: It is likely if the last stages of stellar growth come from accretion. Funny how an astronomer can turn on a dime.

And in that same meeting, Jason Curtis (Columbia) brought us up to date on his work on stellar rotation and its use as a stellar clock. He showed that the usefulness is great (by comparing clusters of different ages); it looks incredible for at least the first Gyr or so of a star's lifetime. But the usefulness decreases at low masses (cool temperatures). Or maybe not, but the physics looks very different.

In the morning, before the meeting, Megan Bedell (Flatiron) and I built a mechanical model of an asteroseismic mode by literally making a code that produces a damped, driven harmonic oscillator, driven by random delta-function kicks. That was fun! And it seems to work.

2019-10-01

testing and modifying GR

The highlight of a low-research day was a great NYU Astro Seminar by Maria Okounkova (Flatiron) about testing or constraining extensions to general relativity using the LIGO detections of black hole binary inspirals. She is interested in terms in a general expansion that adds to Einstein's equations higher powers of curvature tensors and curvature scalars. One example is the Chern–Simons modification, which adds some anisotropy or parity-violation. She discussed many things, but the crowd got interested in the point that the Event Horizon Telescope image of the photon sphere (in principle) constrains the Chern–Simons terms! Because the modification distorts the photon sphere. Okounkova emphasized that the constraints on GR (from both gravitational radiation and imaging) get better as the black holes in question get smaller and closer. So keep going, LIGO!