radial velocity with a gas cell

I had a great meeting today with Matt Daunt (NYU), Lily Zhao (Flatiron), and Megan Bedell (Flatiron), in which we laid out what we need to do to fit a data-driven model to extreme-precision radial-velocity spectra that are taken with a gas cell in the light path. The gas cell imprints lines from known atomic transitions onto every spectrum and thereby secures the wavelength calibration of the device. The question is how to exploit this in a data-driven model. We started by talking about line-spread functions and wavelength solutions and ended up asking some deep questions, like: Do you really need to know the wavelength solution in order to measure a change in radial velocity? I am not sure you do!
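
As a toy illustration of that last question: a relative shift between two epochs can be read off a cross-correlation on the raw pixel grid, with no absolute wavelength solution in sight (converting pixels to a velocity does, of course, require the local dispersion). A minimal sketch with invented spectra, not any real pipeline:

```python
import numpy as np

rng = np.random.default_rng(17)

# Toy spectra on a common pixel grid: a forest of absorption lines,
# with the second epoch shifted by a small sub-pixel amount.
pixels = np.arange(1024, dtype=float)
centers = rng.uniform(50.0, 974.0, size=30)

def spectrum(shift):
    flux = np.ones_like(pixels)
    for c in centers:
        flux -= 0.4 * np.exp(-0.5 * ((pixels - c - shift) / 2.5) ** 2)
    return flux + 0.005 * rng.normal(size=pixels.size)

true_shift = 0.37  # pixels
f1, f2 = spectrum(0.0), spectrum(true_shift)

# Cross-correlate and refine the peak with a parabola: this measures the
# *relative* shift between epochs without any wavelength solution.
lags = np.arange(-5, 6)
cc = np.array([np.dot(f1 - 1.0, np.roll(f2 - 1.0, -k)) for k in lags])
i = int(np.argmax(cc))
num = 0.5 * (cc[i - 1] - cc[i + 1])           # parabolic interpolation
den = cc[i - 1] - 2.0 * cc[i] + cc[i + 1]
est = lags[i] + num / den
print(est)  # should land near the true 0.37-pixel shift
```

This says nothing about the gas cell yet; the point is only that the observable "change in radial velocity" lives in the relative shift, which the cross-correlation delivers directly.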


Dr Steven Mohammed

Today I had the great pleasure of serving on the PhD defense committee for Steven Mohammed (Columbia). Mohammed worked on the post-mission GALEX data in the Milky Way plane, which we took specially as part of the Caltech/Columbia takeover of the instrument at the end of the NASA mission lifetime. Mohammed and my former student Dun Wang built a pipeline to reduce the data and produce catalogs, and Mohammed has used the data to work on photometric metallicities and the abundances in the Milky Way disk. It is a beautiful data set and a beautiful set of results.


does the Milky Way bar have distinct chemistry?

The answer to the post title question is NO, it doesn't. Eilers (MIT), Rix (MPIA), and I discussed Eilers's project on this subject today. If you use the APOGEE data naively, you can conclude that the bar has different abundances. But your conclusion is wrong, because the abundances have surface-gravity-related systematic issues, and the Galactic center is only visible in the most luminous (lowest surface-gravity) stars. So we have corrected that and have a paper to write. Even after correcting for this systematic, there are still interesting systematics, and we don't know what they are, yet.
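
A cartoon of the kind of correction involved (invented numbers, and not the actual Eilers pipeline): if the pipeline imprints a spurious abundance trend with surface gravity, you can fit that trend across the sample and subtract it, so that the luminous, low-log-g stars toward the Galactic center land on the same abundance scale as everything else.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy sample: true [Fe/H] is independent of log g, but the measured
# value carries a spurious log g trend (the assumed systematic).
logg = rng.uniform(0.0, 3.5, size=5000)
true_feh = rng.normal(0.0, 0.2, size=5000)
measured_feh = (true_feh + 0.08 * (logg - 2.0)
                + rng.normal(0.0, 0.02, size=5000))

# Fit the abundance trend with log g and subtract it.
coeffs = np.polyfit(logg, measured_feh, deg=2)
corrected = measured_feh - np.polyval(coeffs, logg)

# The only stars visible at the Galactic center are the most luminous,
# lowest-log-g ones; naively they look metal-poor, corrected they don't.
low_g = logg < 1.0
print(np.mean(measured_feh[low_g]), np.mean(corrected[low_g]))
```

Note that subtracting the full fitted polynomial also removes the sample-mean abundance, so `corrected` is an abundance relative to the sample trend, which is all a bar-versus-disk comparison needs.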


making age estimates out of abundances

Today Trevor David (Flatiron) showed me some amazingly precise and detailed dependences of abundance ratios on stellar age. The idea is: Different stars formed at different moments in the chemical-enrichment history of the Milky Way, and so the abundance ratios encode the stellar ages in detail. The abundances he has are from Brewer and Fischer, and the ages he has are from various sources, including especially isochrone ages. We discussed the following problem:

Given that you don't believe any age estimates in detail, and given that any abundance measurements and age estimates are noisy and biased, what is the best way to build usable abundance–age relationships that can be used as alternative clocks (alternative to isochrones, stellar rotation, asteroseismology, and C–N dredge-up, for examples)? We settled on a few ideas, most of which involve building a low-dimensional hypersurface in the space of abundances and age, and then fitting for adjustments or corrections to different age systems.
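
One version of the fitting idea, in a deliberately over-simple toy: make the "hypersurface" linear, make the per-system corrections pure zero-point offsets, and fit both simultaneously in a single least squares. All names and numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy truth: age is a low-dimensional (here linear) function of three
# abundance ratios; each star's age comes from one of two "systems"
# (say isochrones vs asteroseismology), each with its own zero point.
n = 400
X = rng.normal(size=(n, 3))              # three abundance ratios
true_w = np.array([1.5, -0.8, 0.3])
true_age = 6.0 + X @ true_w
system = rng.integers(0, 2, size=n)      # which clock produced each age
offsets = np.array([0.0, 0.7])           # system 1 runs 0.7 Gyr old
age_obs = true_age + offsets[system] + rng.normal(0.0, 0.3, size=n)

# One least squares: design matrix = [abundances | one-hot system flags],
# so the surface slopes and the per-system zero points are fit jointly.
onehot = np.eye(2)[system]
A = np.hstack([X, onehot])
beta, *_ = np.linalg.lstsq(A, age_obs, rcond=None)
print(beta[:3])           # surface slopes, near true_w
print(beta[4] - beta[3])  # relative zero point of the two systems
```

The real problem presumably wants a flexible (nonlinear, noisy-input) surface and richer per-system corrections than a constant offset, but the joint-fit structure is the same.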


specification of a possible cosmology project in terms of spherical harmonics

Today Kate Storey-Fisher (NYU) and I asked ourselves: How can we do machine learning on cosmological simulations? If we want to use permutation-invariant methods, we need to use something like graph structure, and you can't currently execute graph neural networks at the scale of millions or billions of particles. So we need permutation-invariant scalars and vectors produced from point clouds. We discussed a set of options in terms of taking spherical-harmonic transforms in shells, and then combining those into scalars and vectors. There's a lot of fun geometry involved! Not sure if we have a plan, yet.
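
The shell idea can be made concrete in a few lines: sum spherical harmonics over the particles in each radial shell, then collapse the coefficients into rotation-invariant powers C_l; permutation invariance comes for free because everything is a sum over particles. A toy sketch (invented shell edges and lmax, and written with `lpmv` so the normalization is explicit):

```python
import numpy as np
from math import factorial
from scipy.special import lpmv

rng = np.random.default_rng(3)

# Toy stand-in for simulation particles around some chosen center.
points = rng.normal(size=(2000, 3))
r = np.linalg.norm(points, axis=1)
az = np.arctan2(points[:, 1], points[:, 0])  # azimuthal angle
cospol = points[:, 2] / r                    # cos(polar angle)

def sph_harm_sum(ell, m, mask):
    """a_lm = sum_i Y_lm over the particles selected by mask (m >= 0)."""
    norm = np.sqrt((2 * ell + 1) / (4 * np.pi)
                   * factorial(ell - m) / factorial(ell + m))
    return np.sum(norm * lpmv(m, ell, cospol[mask])
                  * np.exp(1j * m * az[mask]))

def shell_powers(mask, lmax=4):
    """Permutation- and rotation-invariant scalars for one shell:
    C_l = sum_m |a_lm|^2; the particles form a real field, so the
    m < 0 terms mirror the m > 0 ones."""
    C = []
    for ell in range(lmax + 1):
        a = [sph_harm_sum(ell, m, mask) for m in range(ell + 1)]
        C.append(np.abs(a[0]) ** 2 + 2.0 * np.sum(np.abs(a[1:]) ** 2))
    return np.array(C)

# Bin particles into radial shells; stack per-shell invariants as features.
edges = [0.0, 0.8, 1.5, 3.0]
features = np.vstack([shell_powers((r >= lo) & (r < hi))
                      for lo, hi in zip(edges[:-1], edges[1:])])
print(features.shape)  # (3 shells, lmax + 1 multipoles)
```

These C_l are scalars only; building the vector-valued features from cross-terms between shells is where the fun geometry mentioned above comes in.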


how to run community-building meetings

Rodrigo Luger (Flatiron) convened a lunch meeting today to discuss the big weekly community meetings (themed around stars and exoplanets) that we run here in New York City: How do we manage them so that we have a culture of informal scientific interaction, community building, learning, and constructive contribution to each other's research programs? We had a wide-ranging conversation and came up with some ideas for this new semester. We like to learn about (and contribute to) ideas, concepts, and methods. We are less interested in finished, polished talks or presentations. So we might move to a mode where you speak a bit about your project, but you aren't permitted to give your results! That might make it a meeting of scientific introductions and reviews? Worth a try. One of the big themes, of course: How to adapt to the hybrid world we live in now, where things are neither fully in-person nor fully remote.



Today was a teaching-and-life day. Not much research got done!


a conditional form of domain adaptation?

I have been discussing projects related to domain adaptation with Soledad Villar (JHU), and also with Thabo Samakhoana and Katherine Alsfelder, in which you have two different instruments (say) taking data and you want to find the transformation between the instruments such that the data are the same from the two sources. The modification we want (or need) to make is to create a conditional version of this: In SDSS-V, there are two observatories; they take very similar (but not identical) data. What are the transformations that make the data identical? The problem is that the two observatories also observe different stars on average (because they see different parts of the Galaxy). So we need to find the transformations that make the data identical, conditional on other data (like the ESA Gaia data) that we have for the stars. Great problem, and we came up with some inelegant solutions. Are there also elegant solutions?
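
A minimal sketch of one such (possibly inelegant) approach, in a one-dimensional toy where the conditioning information is a single Gaia-like label z: fit the conditional mean E[x | z] at each observatory, then read the instrument-to-instrument transformation off the difference of the two fits, so the different z distributions never bias the answer. All numbers here are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

# One data feature x per star and one conditioning label z per star.
# The observatories see different z distributions AND apply different
# instrumental distortions; naive distribution-matching conflates them.
def observe(n, z_mean, gain_true, offset_true):
    z = rng.normal(z_mean, 1.0, size=n)         # different stars on average
    x = 2.0 * z + 1.0 + rng.normal(0.0, 0.1, size=n)  # shared physics
    return z, gain_true * x + offset_true       # instrument distortion

z_A, x_A = observe(3000, 0.0, 1.00, 0.00)       # reference observatory
z_B, x_B = observe(3000, 1.5, 1.05, 0.30)       # gain + offset to recover

# Fit E[x | z] at each observatory; the B -> A transformation falls out.
pA = np.polyfit(z_A, x_A, 1)
pB = np.polyfit(z_B, x_B, 1)
gain = pA[0] / pB[0]
offset = pA[1] - gain * pB[1]
x_B_fixed = gain * x_B + offset
print(gain, offset)  # should recover (1/1.05, -0.30/1.05)
```

The real problem is high-dimensional on both sides, so the conditional means would be flexible regressions rather than straight lines, and matching only the means (not full conditional distributions) is part of what makes this feel inelegant.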


awakening old projects

Megan Bedell (Flatiron) and I discussed what it would take to resurrect and finish one of our unfinished papers today. Not much, we think! But it's noticeable that at the end of this pandemic I have at least six papers that are 75-percent done.


new cosmological tests with LIGO

Kate Storey-Fisher (NYU) and I had a wide-ranging conversation with Will Farr (Flatiron) about uses of LIGO sources for cosmological studies. We all have the intuition that there is lots of space in the literature for new approaches, but we can't quite figure out where to position ourselves. I like the idea of going fully frequentist, because most of the competition is purely Bayesian. The ideas I like best so far involve something like correlations between LIGO error boxes and large-scale structure surveys.
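
A toy version of the error-box idea, on a fake two-dimensional sky with invented clustering: count catalog galaxies inside each event's error box and compare to randomly placed boxes, with a simple frequentist excess statistic rather than any posterior.

```python
import numpy as np

rng = np.random.default_rng(11)

# Fake sky: uniform background plus clumps standing in for large-scale
# structure; the "GW events" are placed near clump centers.
bg = rng.uniform(0.0, 1.0, size=(15000, 2))
clumps = rng.uniform(0.1, 0.9, size=(40, 2))
members = clumps[rng.integers(0, 40, 5000)] + rng.normal(0.0, 0.01, (5000, 2))
gal = np.vstack([bg, members])
events = clumps[rng.integers(0, 40, 30)] + rng.normal(0.0, 0.01, (30, 2))

def counts_in_boxes(centers, half=0.03):
    """Number of catalog galaxies inside a square error box per center."""
    return np.array([np.sum(np.all(np.abs(gal - c) < half, axis=1))
                     for c in centers])

# Excess statistic: event boxes versus randomly placed boxes.
randoms = rng.uniform(0.1, 0.9, size=(500, 2))
n_ev = counts_in_boxes(events)
n_rand = counts_in_boxes(randoms)
z = (n_ev.mean() - n_rand.mean()) / (n_rand.std() / np.sqrt(len(n_ev)))
print(z)
```

Real error boxes are posterior sky maps, not squares, and the serious version would cross-correlate against the survey's full density field, but the detection statistic has this same shape.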