I spent the last two days at the National Society of Black Physicists meeting in Providence, RI. It was a great meeting, with a solid mix of traditional physics, strategizing about the state of the profession, and offline conversations about politics and the many communities of physicists. Many great things happened. Here are some random highlights:

I learned from Bryen Irving (Stanford) that stiffer neutron-star equations of state lead to larger tidal effects on binary inspiral. After all, a stiffer equation of state means a larger radius, and a larger radius means more tidal distortion to the surface equipotential. Deep!

I very much enjoyed a comment by Richard Anantua (Harvard) about “the importance of late-time effects on one's career”. He was talking about the point that there are combinatorially many ways to get from point A to point B in your career, and that it is your current state that matters most. Beautiful!

There was an excellent talk by Joseph Riboudo (Providence College) that was simultaneously about how to influence the community with a Decadal-survey white paper and about primarily undergraduate institutions and how we should be serving them as a community. He was filled with wisdom! And learning.

Eileen Gonzalez (CUNY) showed her nice results on understanding incredibly cool (and yes, I mean low-temperature) star binaries. She is finding that data-driven atmospheric retrieval methods plus clouds work better than grids of ab initio models. That's important for the JWST era.

And I absolutely loved the off-session chatting with Dara Norman (NOAO) and others. Norman is filled with conspiracy theories and I have to tell you something: They are all True. Norman also deserves my thanks for organizing much of the astrophysics content at the meeting. It was a great couple of days.
Grace Telford (Rutgers) showed up in NYC today and we discussed the inference of star-formation histories from observations of resolved stellar populations. We discussed the point that the space is high dimensional (because, say, the star-formation history is modeled as a set of 30-ish star-formation rates in bins), which leads to two problems. The first is that a maximum-likelihood or maximum-a-posteriori setting of the SFH will be atypical (in high dimensions, optima are atypical relative to one-sigma-ish parameter settings). The second is that the results are generally extremely prior-dependent, and the priors are usually made up by investigators, not any attempt to represent their actual beliefs. We talked about ways to mitigate these issues.
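The atypicality of optima in high dimensions can be seen in a toy example. Here is a sketch (everything is made up: an isotropic Gaussian stands in for a posterior, with 30 dimensions standing in for the 30-ish SFH bins); essentially no posterior draw lives anywhere near the mode:

```python
import numpy as np

rng = np.random.default_rng(17)
d = 30  # stand-in for the number of SFH bins
samples = rng.standard_normal((10000, d))  # draws from an isotropic Gaussian "posterior"
radii = np.linalg.norm(samples, axis=1)    # distance of each draw from the mode

# The MAP is at the origin (radius 0), but typical samples concentrate
# on a shell at radius ~sqrt(d): the optimum is nowhere near a typical draw.
print(np.sqrt(d))          # ~5.48
print(radii.mean())        # ~5.4
print((radii < 1).mean())  # 0.0 — no samples anywhere near the mode
```

This is why reporting the MAP star-formation history can be misleading: it is not representative of the posterior mass.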
As my loyal reader knows, I am working with Lily Zhao (Yale) to calibrate the EXPRES spectrograph. Our approach is non-parametric: We can beat any polynomial calibration with an interpolation (we are using splines, but one could also use a Gaussian Process or any other method, I think). The funniest thing happened today, which surprised me, but shouldn't have! When Zhao plotted a histogram of the differences between our predicted line locations (from our interpolation) and the observed line locations (of held-out lines, held out from the interpolation), they were always redshifted! There was a systematic bias everywhere. We did all sorts of experiments but could find no bug. What gives? And then we had a realization which is pretty much Duh:
If you are doing linear interpolation (and we were at this point), and if your function is monotonically varying, and if your function's first derivative is also monotonically varying, the linear interpolator will always be biased to the same side! Hahaha. We switched to a cubic spline and everything went unbiased.
In detail, of course, interpolation will always be biased. After all, it does not represent your beliefs about how the data are generated, and it certainly does not represent the truth about how your data were generated. So it is always biased. It's just that once we go to a cubic spline, that bias is way below our precision and accuracy (under cross-validation). At least for now.
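A tiny illustration of the effect, with a made-up convex function standing in for the wavelength solution: when the function is monotonic with a monotonic first derivative, linear interpolation misses to the same side at every held-out point, while a cubic spline's errors are far smaller:

```python
import numpy as np
from scipy.interpolate import CubicSpline

x_knots = np.linspace(0., 4., 9)
y_knots = np.exp(x_knots)  # monotonic, with monotonic first derivative (convex)
x_test = 0.5 * (x_knots[:-1] + x_knots[1:])  # held-out points between knots

lin = np.interp(x_test, x_knots, y_knots)    # linear interpolation
cub = CubicSpline(x_knots, y_knots)(x_test)  # cubic-spline interpolation

# The chord of a convex function lies above the curve, so the linear
# interpolator is biased to one side at *every* held-out point:
print(np.all(lin - np.exp(x_test) > 0))  # True
# The cubic spline is still biased in principle, but far less in practice:
print(np.max(np.abs(cub - np.exp(x_test))) < np.max(lin - np.exp(x_test)))  # True
```

The spline's residuals here are smaller by orders of magnitude, which is the sense in which the bias drops below our precision.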
I had a meeting with Emily Cunningham (Flatiron) to discuss any projects of mutual interest. She has been looking at simulations of the Milky Way (toy simulations) in which the LMC and SMC fall in. In these simulations, the Milky Way gets tidally distorted by the infall, and various observational consequences follow. For example, the disk ends up having a different mean velocity than the halo! And for another, different parts of the halo move relative to one another, in the mean. Cunningham's past work has been on the velocity variance; now it looks like she has a project on the velocity mean! The predictions are coming from toy simulations (from the Arizona group) but I'm interested in the more general question of what can be learned from spatial variations in the mean velocity in the halo. It might put strong constraints on the recent-past time-dependence.
Oh what a great day! Not a lot of research got done; NSF proposals, letters of recommendation, and all that. But in the afternoon, undergraduate researcher Abby Shaum (NYU) and I looked at her project to do frequency demodulation on asteroseismic modes to find orbital companions and we got one. Our target is a hot star that has a few very strong asteroseismic modes (around 14 cycles per day in frequency), and our demodulator is actually a phase demodulator (not frequency) but it's so beautiful:
The idea of the demodulator is that you mix (that is, multiply) the signal (which, in this case, is bandpass-filtered NASA Kepler photometric data) with a complex sinusoid at (as precisely as you can set it) the asteroseismic carrier frequency. Then you Gaussian-smooth the real and imaginary parts of that product over some window timescale (the inverse bandwidth, if you will). The resulting extremely tiny phase variations (yes, these stars are coherent over years) have a periodogram or power spectrum, which shows periodicity at around 9 days, which is exactly the binary period we expected to find (from prior work).
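That procedure can be sketched on synthetic data. Everything below (cadence, time span, mode amplitude, the 9-day modulation) is invented for illustration; this is not the real Kepler pipeline, just the mix-smooth-periodogram idea:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Synthetic stand-in for the photometry: one coherent mode at a
# 14-cycle-per-day carrier, phase-modulated by a 9-day "orbit".
dt = 0.01                      # days per sample (made up)
t = np.arange(0., 100., dt)    # 100 days of fake data
f_carrier = 14.0               # carrier frequency, cycles per day
phi_true = 0.05 * np.sin(2. * np.pi * t / 9.)  # tiny orbital phase wobble
signal = np.cos(2. * np.pi * f_carrier * t + phi_true)

# Mix: multiply by a complex sinusoid at the carrier frequency...
mixed = signal * np.exp(-2.j * np.pi * f_carrier * t)
# ...then Gaussian-smooth real and imaginary parts over ~1 day
# (the inverse bandwidth), which kills the 2 * f_carrier term.
sigma = 1.0 / dt
smooth = (gaussian_filter1d(mixed.real, sigma)
          + 1.j * gaussian_filter1d(mixed.imag, sigma))
phase = np.angle(smooth)  # recovered (attenuated) phase variations

# The power spectrum of the recovered phase peaks at the orbital frequency.
power = np.abs(np.fft.rfft(phase - phase.mean())) ** 2
freqs = np.fft.rfftfreq(len(t), dt)
print(1. / freqs[np.argmax(power[1:]) + 1])  # recovered period, ~9 days
```

Note that the smoothing attenuates the phase signal a bit (the window has a finite bandwidth), but the periodicity survives cleanly.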
I'm stoked! The advantages of our method over previous work are: Our method can easily combine information from many modes. Our method can be tuned to whatever modes are in whatever data. We did not have to bin the lightcurve; we only had to choose an effective bandwidth. The disadvantage is: We don't have a probabilistic model! We just have a procedure. But it's so simple and beautiful. I'm feeling like the engineer I was born to be.
It was a great research day today. I worked with Lily Zhao (Yale) on the wavelength calibration of the EXPRES spectrograph, which my loyal reader knows is a project of Debra Fischer (Yale). Lily and I cleaned up and sped up (by a lot) the polynomial fitting that the EXPRES team is doing, and showed (with a kind of cross-validation) that the best polynomial order for the fit is in the range 8 to 9. This is for a high-resolution, laser-frequency-comb-calibrated, temperature-controlled, bench-mounted, dual-fiber spectrograph.
But then we threw out that polynomial fit and just worked on interpolating the laser frequency-comb line positions. These are fixed in true wavelength and dense on the detector (for many orders, anyway). Oh my goodness did it work! When we switched from polynomial fitting to interpolation, the cross-validation tests got much better, and the residuals went from being very structured and repeatable to looking like white noise. When we averaged solutions, we got very good results, and when we did a PCA of the differences away from the mean solution, it looks like the variations are dominated by a single variability dimension! So it looks like we are going to end up with a very very low-dimensional, data-driven, non-parametric calibration system that hierarchically pools information from all the calibration data to calibrate every single exposure. I couldn't be more stoked!
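The flavor of that comparison can be sketched on a made-up spectrograph order (the distortion shape, noise level, and line count below are all invented, not EXPRES numbers): under leave-one-out cross-validation, a cubic-spline interpolator through dense comb lines beats even a high-order polynomial when the true wavelength solution has non-polynomial structure:

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(42)
# Hypothetical stand-in for one order: comb lines dense in pixel position,
# with a smooth but non-polynomial distortion plus tiny centroiding noise.
pix = np.linspace(0., 1., 60)
wave = 5000. + 80. * pix + 0.5 * np.sin(25. * pix) + 0.001 * rng.standard_normal(60)

def loo_rms(predict):
    """Leave-one-out RMS: predict each held-out line from all the others."""
    errs = [predict(np.delete(pix, i), np.delete(wave, i), pix[i]) - wave[i]
            for i in range(len(pix))]
    return np.sqrt(np.mean(np.square(errs)))

def poly9(x, y, xi):
    return np.polyval(np.polyfit(x, y, 9), xi)

def spline(x, y, xi):
    return CubicSpline(x, y)(xi)

print(loo_rms(poly9), loo_rms(spline))  # the spline wins by a wide margin
```

The point is the structure of the test, not the specific numbers: held-out residuals are what told us the interpolation was the better calibration.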
A no-research day (Thursdays are always bad) was ended on a great note with a Colloquium by Ian Dobbs-Dixon (NYUAD), who spoke about the atmospheres of hot-Jupiter-like exoplanets. He has a great set of computational machinery that connects a global climate model built for Earth climate modeling with lots of planet-relevant physics (like strong, anisotropic insolation and internal heat flows) to figure out what must be happening on these planets. He showed some nice predictions and also some nice explanations of the observed property (yes, observed property) that these planets do not have their hottest point at the sub-stellar point. It's so exciting when we think forward to what might be possible with NASA JWST.
My main research contribution today was to write some notes for myself and Lily Zhao (Yale) about how we might start to produce a low-dimensional, hierarchical, non-parametric calibration model for the EXPRES spectrograph.
At the end of a long faculty meeting at NYU Physics, my colleague Shura Grosberg came to me to discuss a subject we have been discussing at a low rate for many months: How is it possible that my watch (my wristwatch) is powered purely by the stochastic motions of my arm, when thermal ratchets are impossible? He presented to me a very simple model, in which my watch is seen as a set of three coupled systems. The first is the winder, which is a low-Q oscillator that works at long periods. The second is the escapement and spring, which is a high-Q oscillator with a period of 0.2 seconds. The third is the thermal bath of noise to which the watch dissipates energy. If my arm delivers power only at long periods (or mainly at long periods), then it couples well only to the first of these. And then power can flow to the other two systems. Ah, I love physicists!
As my loyal reader knows, I love the Brown-Bag talks at the Center for Cosmology and Particle Physics. Today was a great example: Hongwan Liu (NYU) talked about milli-charged dark matter. Putting a charge in the dark sector is a little risky, because the whole point of dark matter is that it is invisible, electromagnetically! But it turns out that if you include enough particle complexity in the dark sector, you can milli-charge the dark matter and move thermal energy from the light sector into the dark sector and vice versa.
Liu was motivated by some issues with 21-cm intensity mapping, but he has some very general ideas and results in his work. I was impressed by the point that his work involves the heat capacity of the dark sector. That's an observable, in principle! And it depends on the particle mass, because a dark sector with smaller particle mass has more particles and therefore more degrees of freedom and more heat capacity! It's interesting to think about the possible consequences of this. Can we rule out very small masses somehow?
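The mass scaling is simple enough to write down. Here is my own back-of-the-envelope version (an ideal-gas toy of my invention, not anything from Liu's talk): at fixed mass density, the number density goes as one over the particle mass, so the heat capacity grows as the particles get lighter:

```python
# Toy scaling: for a fixed dark-matter mass density rho, the number
# density is n = rho / m, so an ideal monatomic gas of dark particles
# has heat capacity per volume C = (3/2) n k_B, growing as m shrinks.
k_B = 1.380649e-23  # Boltzmann constant, J/K

def heat_capacity_per_volume(rho, m):
    """Ideal monatomic gas: (3/2) k_B per particle, n = rho / m particles."""
    return 1.5 * (rho / m) * k_B

rho = 1e-21  # kg/m^3, a made-up halo-ish density
ratio = heat_capacity_per_volume(rho, 1e-27) / heat_capacity_per_volume(rho, 1e-26)
print(ratio)  # 10x the heat capacity for 10x lighter particles
```

Of course a real dark sector need not be an ideal monatomic gas; this is just the counting argument in code.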
Continuing with the stuff I got distracted into yesterday (when I should have been working on NSF proposals!), I did some work on phase manipulation to interpolate between images. The idea was: Fourier transform both images, and interpolate in amplitude and phase independently, rather than just interpolating the complex numbers in a vector sense. It works in some respects and not in others. And it works much better on a localized image patch than on a whole image. I made this tweet to demonstrate. This is related to the idea that people who do this professionally use wavelet-like methods to get local phase information in the image instead of manipulating global phase. So the trivial thing doesn't work; I need to learn more!
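For concreteness, here is roughly the trivial thing, sketched in NumPy (the blob demo and all its parameters are made up): interpolate the Fourier amplitudes linearly, and blend the phases along the shorter arc between the two angles. For a small pure translation of a smooth patch it does the right thing; for large shifts or whole images, phase wrapping ruins it:

```python
import numpy as np

def interp_fourier(img_a, img_b, t):
    """Blend two same-shape images in the Fourier domain at fraction t,
    interpolating amplitude and phase separately instead of lerping the
    complex coefficients."""
    Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    amp = (1. - t) * np.abs(Fa) + t * np.abs(Fb)
    dphi = np.angle(Fb * np.conj(Fa))  # phase difference, wrapped to (-pi, pi]
    return np.real(np.fft.ifft2(amp * np.exp(1.j * (np.angle(Fa) + t * dphi))))

# Demo: for a small pure translation of a smooth blob, the half-way
# frame lands half-way between the two input positions.
yy, xx = np.mgrid[0:32, 0:32]
blob = lambda cx: np.exp(-((xx - cx) ** 2 + (yy - 16.) ** 2) / 8.)
halfway = interp_fourier(blob(14.), blob(16.), 0.5)
print(np.unravel_index(np.argmax(halfway), halfway.shape))  # (16, 15)
```

The failure mode is visible in the `dphi` line: once the true phase difference exceeds pi at some frequency, the wrapped difference points the wrong way, which is why localized (wavelet-like) phase works better than global phase.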
Nora Shipp (Chicago) has been in town this week, working with Adrian Price-Whelan to find halo substructures and stellar streams around the Milky Way. The two of them made beautiful animations, paging through distance slices, showing halo stellar density (as measured by a color-magnitude matched filter). There are lots of things visible in those animations! We discussed the point that what makes overdensities appear to the human eye is their coherence through slices.
That made me think of things that Bill Freeman (MIT) and his lab do with amplifying small signals in video: Should we be looking for small overdensities with similar tricks? Freeman's lab uses phase transforms (like Fourier transforms and more localized versions of those) to detect and amplify small motions. Maybe we should use phase transforms here too. That led Price-Whelan and me to hack a little bit on this image pair by Judy Schmidt, which was fun but useless!
Late in the day, Megan Bedell (Flatiron), Lily Zhao (Yale), Debra Fischer (Yale), and I all met to discuss EXPRES data. It turns out that what the EXPRES team has in terms of data, and what they need in terms of technology, is incredibly well aligned with what Bedell and I want to do in the EPRV space. For example, EXPRES has been used to resolve the asteroseismic p-modes in a star. For another, it has made excellent observations of a spotty star. For another, it has a calibration program that wants to go hierarchical. I left work at the end of the day extremely excited about the opportunities here.