2024-03-12

The Cannon and El Cañon

At the end of the day I got a bit of quality time in with Danny Horta (Flatiron) and Adrian Price-Whelan (Flatiron), who have just (actually just before I met with them) created a new implementation of The Cannon (the data-driven model of stellar photospheres originally created by Melissa Ness and me back in 2014/2015). Why!? Not because the world needs another implementation. We are building a new implementation because we plan to extend it into El Cañon, which will carry the probabilistic model into the label domain: It will properly generate or treat noisy and missing labels. That will permit us to learn latent labels, and de-noise noisy labels.
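
For the record, the heart of The Cannon is a per-pixel regression of flux against a quadratic function of the stellar labels, with the optimization run in reverse (over labels) at test time; El Cañon adds a likelihood term for the labels themselves, so noisy or missing labels can be inferred along with everything else. Here is a minimal sketch of the training step, with illustrative variable names (this is not the Horta / Price-Whelan code):

    # Minimal sketch of the per-pixel model at the heart of The Cannon,
    # assuming quadratic-in-labels features and simple weighted least squares;
    # names and shapes are illustrative only.
    import numpy as np

    def design_matrix(labels):
        """Quadratic features of the labels: 1, l_k, and l_k * l_m."""
        n, k = labels.shape
        quad = np.einsum("ni,nj->nij", labels, labels)
        iu = np.triu_indices(k)
        return np.hstack([np.ones((n, 1)), labels, quad[:, iu[0], iu[1]]])

    def fit_one_pixel(flux, ivar, labels):
        """Weighted least squares for the coefficients at a single pixel."""
        A = design_matrix(labels)            # (n_stars, n_features)
        ATA = A.T @ (ivar[:, None] * A)
        ATy = A.T @ (ivar * flux)
        return np.linalg.solve(ATA, ATy)     # coefficient vector for this pixel

    # tiny fake-data check: 128 stars, 3 labels, one pixel
    rng = np.random.default_rng(0)
    labels = rng.normal(size=(128, 3))
    flux = rng.normal(1.0, 0.01, size=128)
    theta = fit_one_pixel(flux, np.full(128, 1.0e4), labels)

At test time you hold the per-pixel coefficients fixed and optimize the labels of a new star; the El Cañon move is to treat those labels probabilistically rather than as fixed, known inputs.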

2024-03-11

black holes as the dark matter

Today Cameron Norton (NYU) gave a great brown-bag talk on the possibility that the dark matter might be asteroid-mass-scale black holes. This is allowed by all constraints at present: If the masses are much smaller, the black holes evaporate or emit observably. If the black holes are much larger, they would create observable microlensing or dynamical signatures.

She and Kleban (NYU) are working on methods for creating such black holes primordially, by modifying the potential during inflation, creating opportunities for bubble nucleations that would subsequently collapse into small black holes after the Universe exits inflation. It's speculative obviously, but not ruled out at present!

An argument broke out during and after the talk about whether you would be injured if you were intersected by a 10^20 g black hole! My position is that you would be totally fine! Everyone else in the room disagreed with me, for many different reasons. Time to get calculating.

Another great idea: Could we find stars that have captured low-mass black holes by looking for the radial-velocity signal? I got really interested in this one at the end.

2024-03-10

APOGEE spectra as a training set

I spent a lot of the day building a training set for a machine-learning problem set. I am building it out of the SDSS-V APOGEE spectra, which are like one-dimensional images, well suited to training CNNs and other kinds of deep-learning models. I wanted relatively raw data, so I spent a lot of time going deep in the SDSS-V data model and data directories, which are beautiful. I learned a lot, and I created a public data set. I chose stars in a temperature and log-gravity range in which I think the APOGEE pipelines work well and the learning problem should work. I didn't clean the data, because I am hoping that contemporary deep-learning methods should be able to find and deal with outliers and data issues. If you want to look at my training set (or do my problem set), start here.
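
To give the flavor of the selection (with a placeholder file name, column names, and cut values, not the ones I actually used), the cut is just a box in effective temperature and surface gravity on an allStar-style summary table:

    # Sketch of the training-set cut; the file, columns, and ranges below are
    # placeholders, not the actual choices in the public data set.
    import numpy as np
    from astropy.table import Table

    cat = Table.read("allStar-placeholder.fits")            # hypothetical path
    good = (
        (cat["TEFF"] > 4500.0) & (cat["TEFF"] < 6500.0)      # placeholder box
        & (cat["LOGG"] > 1.0) & (cat["LOGG"] < 3.5)
        & np.isfinite(cat["TEFF"]) & np.isfinite(cat["LOGG"])
    )
    training = cat[good]
    print(len(training), "stars; no further cleaning, on purpose")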

2024-03-09

getting the absolutely rawest APOGEE data

I spent time today (at the bar!) understanding the data model and directory structure for the raw, uncalibrated APOGEE data. The idea is that I want to do a real-data example for my paper with Casey (Monash) on combining spectra, and I want to get back to the raw inputs. I also might use these spectra for a problem set in my machine-learning class. The code I wrote is all urllib and requests and re, because I think it is necessary to read directories to understand the data dependencies in the survey. Is that bad?

Putting aside my concerns: The coolest thing about this project is that the SDSS family of projects (currently SDSS-V) puts absolutely every bit of its data on the web, in raw and reduced form, for re-analysis at any level or stage. That's truly, really, open science. If you don't believe me, check out this code that spelunks the raw data. It's all just URL requests with no authentication!
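
Here is the flavor of that spelunking code, as a sketch: the base URL is a placeholder (check the SDSS data-access pages for the real directory paths), and the regular expression just pulls file names out of the directory-listing HTML:

    # Sketch of the directory spelunking: fetch a directory-listing page and
    # harvest the FITS file names with a regular expression. BASE is a
    # placeholder; point it at the real SDSS-V raw-data directory.
    import re
    import requests

    BASE = "https://data.sdss5.org/sas/"     # placeholder root; append the path you care about
    html = requests.get(BASE).text           # note: no authentication needed
    files = sorted(set(re.findall(r'href="([^"]+\.fits)"', html)))
    for f in files:
        print(BASE + f)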

2024-03-08

combining spectral exposures

I wrote words! I got back to actually doing research this week, in part inspired by a conversation with my very good friend Greg McDonald (Rum & Code). I worked on the words in the paper I am finishing with Andy Casey (Monash) about how to combine individual-visit exposures into a mean spectrum. The biggest writing job I did today was the part of the paper called “implementation notes”, which talks about how to actually implement the math on a finite computer.
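
For orientation only, here is the textbook baseline for this operation (not necessarily the estimator the paper advocates): a per-pixel inverse-variance-weighted mean of visits that already share a wavelength grid.

    # Textbook baseline for combining visit spectra that share a wavelength
    # grid: inverse-variance-weighted mean per pixel. This is a generic
    # sketch, not the method of the paper with Casey.
    import numpy as np

    def combine_visits(flux, ivar):
        """flux, ivar: arrays of shape (n_visits, n_pixels)."""
        ivar_comb = ivar.sum(axis=0)
        with np.errstate(invalid="ignore", divide="ignore"):
            num = (ivar * flux).sum(axis=0)
            flux_comb = np.where(ivar_comb > 0, num / ivar_comb, 0.0)
        return flux_comb, ivar_comb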

2024-02-12

the transparency of the Universe and the transparency of the university

The highlight of my day was a wide-ranging conversation with Suroor Gandhi (NYU) about cosmology, career, and the world. She made a beautiful connection between a part of our conversation in which we were discussing the transparency of the Universe, and new ways to study it, and a part in which we were discussing the transparency with which the University speaks about disciplinary and rules cases, which (at NYU anyway) is not very good. Hence the title of this post. On the transparency of the Universe, we discussed how the fact that distant objects (quasars, say) do not appear blurry must put some limit on cosmic transparency. On the transparency of the University, we discussed how much we care about the behavior of our institutions, and about changing those behaviors. I'm a big believer in open science, open government, and open institutions.

I've been privileged these years to have some very thoughtful scientists in my world. Gandhi is one of them.

2024-01-22

Betz limit for sailboats?

In the study of sustainable energy, there is a nice result on windmills, called the Betz limit: There is a finite limit to the fraction of the kinetic energy of the wind that a windmill can absorb or exploit. The reason is often stated as: If the windmill took all of the power in the wind, the wind would stop, and then there would be no flow of energy over the windmill. I'm not sure I exactly agree with that explanation, but let's leave that here.
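
For completeness, here is the standard one-dimensional actuator-disk argument behind the limit, in a few lines of Python: parameterize the slow-down at the disk by an induction factor a, write the extracted power as a fraction of the free-stream kinetic-energy flux through the disk area, and maximize. The famous 16/27 (about 0.59) comes out at a = 1/3.

    # Actuator-disk (Betz) sketch: the wind crosses the disk at v (1 - a) and
    # leaves far downstream at v (1 - 2a); the extracted power, divided by the
    # free-stream kinetic-energy flux through the disk area, is 4 a (1 - a)^2.
    import numpy as np

    a = np.linspace(0.0, 0.5, 501)        # axial induction factor
    cp = 4.0 * a * (1.0 - a) ** 2         # power coefficient C_p(a)

    i = np.argmax(cp)
    print(f"max C_p = {cp[i]:.4f} at a = {a[i]:.3f}")   # 16/27 ~ 0.593 at a = 1/3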

On my travel home today I worked on the possibility that there is an equivalent to the Betz limit for sailboats. Is there an energetic way of looking at sailing that is useful?

One paradox is that a sailboat is sailing steadily when the net force on the boat is zero (just like when a windmill is turning at constant angular velocity). In the Betz limit, the windmill is thought of as having two different torques on it, one from the wind, and one from the turbine. Sailing has no turbine. So this problem has a conceptual component to it.

2024-01-19

Happy birthday, Rix

Today was an all-day event at MPIA to celebrate the 60th birthday (and 25th year as Director) of Hans-Walter Rix (MPIA). There were many remarkable presentations and stories; he has left a trail of goodwill wherever he has gone! I decided to use the opportunity to talk about measurement, which is something that Rix and I have discussed for the last 18 years. My slides are here.

I've been very lucky with the opportunities I've had to work with wonderful people.

2024-01-14

divide by your selection function, or multiply by it?

With Kate Storey-Fisher (San Sebastián), Abby Williams (Caltech) is working on a paper about large-angular-scale power, or anisotropy, in the distribution of quasars. It is a great subject; we need to estimate this power in the context of a very non-trivial all-sky selection function. The tradition in cosmology is to divide the data by this selection function. But of course you shouldn't manipulate your data. Instead, you could multiply your model by the selection function. You can guess which one I prefer! In fact you can do either, as long as you weight the data in the right way in the fit. I promised to write up a few words and equations about this for Williams.
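
The few words and equations I owe Williams boil down to something like the following sketch (made-up names and a toy Poisson pixelization, not the actual quasar pipeline): the selection function enters on the model side of the likelihood, multiplying the expected counts, and the data themselves are never divided by anything.

    # Toy Poisson likelihood with the selection function on the model side.
    # counts: observed quasar counts per sky pixel; selection: completeness in
    # [0, 1] per pixel; rate(theta): the model's true expected counts per pixel.
    # All names here are illustrative.
    import numpy as np
    from scipy.special import gammaln

    def ln_like(theta, counts, selection, rate):
        lam = selection * rate(theta)        # expected *observed* counts
        return np.sum(counts * np.log(lam) - lam - gammaln(counts + 1.0))

The divide-the-data estimator can be made to agree, but only if the corresponding weights come along for the ride in the fit, which is the point of the promised write-up.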

2024-01-11

why study astrophysics?

I spent the day with Neige Frankel (CITA), working on various projects. One of the things we discussed was her slides for an upcoming talk. I made the following blanket statement; is it true? There are only two ways to ultimately justify a subject of study in astrophysics. Either it will tell us something important about fundamental physics (think: dark matter, initial conditions of the Universe, or nucleosynthesis, say), or else it will tell us something about our origins (formation of our Galaxy, occurrence of rocky, habitable planets, origin of life, say). I am not entirely sure this is right, but I can't currently think of much in the way of counter-examples. I guess one other justification might be that we are developing technologies that will help people in other areas (CCDs, spacecraft attitude management, or machine learning, say).

2024-01-09

Galactic cartography

Neige Frankel (CITA) and I discussed measurements of the age and metallicity gradients in the Milky Way today. In my machine-learning world, I am working on biases that come in when you use the outputs of regressions (label transfer) to perform population inferences (like mean age as a function of actions or radius). We are gearing up to do a fake but end-to-end simulation of how the Milky Way gets observed, to see if the observed Galaxy looks anything like (what we know in this fake world to be) the truth.
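
A toy version of the kind of bias I mean (made-up numbers, nothing to do with our actual planned simulation): if the label-transfer step shrinks ages toward the sample mean, as regressions tend to do, the population-level radial age gradient comes out shallower than the truth.

    # Toy demonstration of label-transfer bias in a population inference:
    # shrunk (regressed) age labels attenuate the recovered radial gradient.
    # All numbers are made up for illustration.
    import numpy as np

    rng = np.random.default_rng(17)
    R = rng.uniform(4.0, 12.0, 20000)                          # radius (kpc)
    age_true = 10.0 - 0.6 * R + rng.normal(0.0, 1.0, R.size)   # fake truth (Gyr)
    age_label = age_true.mean() + 0.7 * (age_true - age_true.mean())  # shrunk labels

    print("true gradient:      %+.3f Gyr / kpc" % np.polyfit(R, age_true, 1)[0])
    print("recovered gradient: %+.3f Gyr / kpc" % np.polyfit(R, age_label, 1)[0])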

2024-01-08

auto-encoder for calibration data

Connor Hainje (NYU) is looking at whether we could build a hierarchical or generative model of SDSS-V BOSS spectrograph calibration data, such that we could reduce the survey's per-visit calibration overheads. He started by building an auto-encoder, which is a simple, self-supervised generative model. It works really well! We discussed how to judge performance (held-out data) and how performance should depend on the size of the latent space (I predict that it won't want a large latent space). We also decided that we should announce an SDSS-V project and send out a call for collaboration.

[Note added later: Contardo (SISSA) points out that an autoencoder is not a generative model. That's right, but there are multiple definitions of generative model, only one of which is that you can sample from it. Another is that it is a parameterized model that can predict the data. Another is that it is a likelihood function for the parameters. But she's right: We are going to repurpose parts of the auto-encoder into a generative model in the sense of a likelihood function.]
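
For concreteness, here is the kind of auto-encoder in question, as a minimal sketch (not Hainje's actual architecture, sizes, or training set-up): a fully connected encoder and decoder trained on reconstruction error, with the latent dimension as the knob whose effect on held-out performance we want to measure.

    # Minimal fully connected auto-encoder sketch; n_pixel and n_latent are
    # placeholder sizes, and nothing here is specific to BOSS calibration data.
    import torch
    import torch.nn as nn

    n_pixel, n_latent = 4096, 8

    class AutoEncoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(n_pixel, 256), nn.ReLU(), nn.Linear(256, n_latent))
            self.decoder = nn.Sequential(
                nn.Linear(n_latent, 256), nn.ReLU(), nn.Linear(256, n_pixel))

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = AutoEncoder()
    opt = torch.optim.Adam(model.parameters(), lr=1.0e-3)
    loss_fn = nn.MSELoss()

    def train_step(batch):                # batch: (n_spectra, n_pixel) tensor
        opt.zero_grad()
        loss = loss_fn(model(batch), batch)
        loss.backward()
        opt.step()
        return loss.item()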

2024-01-05

what book am I going to write?

One possible new year's resolution this year is for me to decide which book I am going to write. I don't love this, because it is the hallmark of a scientist at the end of their career that they switch to writing books! I guess maybe I'm at the end of my career? But that said, I have (maybe like many scientists at the end of their careers?) a lot to say. Okay anyway, I had a long conversation this morning with Greg McDonald (Rum & Code) about all this, and he strongly encouraged me to make some content for the project code-named “The Practice of Astrophysics”.

2024-01-03

wind power

I met up with Matt Kleban (NYU) to discuss our dormant project on the physics of sailing. Our conversation ranged around many different things related to sustainable power. In particular, we discussed whether it was possible to take an energy or power point of view on sailing, which has to do with the work that the sailboat is doing on the water and on the air. I feel like there will be some symmetries in play there. We also discussed power generation with wind farms, including the Betz limit (which is a limit on how much power you can get out of the wind). Is there an equivalent of the Betz limit for a sailboat? Finally, Kleban made a remark that is simultaneously obvious and deep: If you have a propeller turning in a fluid (like air), it might be a turbine (generating power from the wind) or a fan (using power to make wind). The question of turbine or fan has a frame-independent (relativistically scalar) answer.

2024-01-02

informal scientific communication

I have been sending out my draft manuscript on machine learning in the natural sciences to various people I know who have opinions on this. I've been getting great feedback, and it reminds me that there is a lot of important scientific communication that is on informal channels. One thing that interests me: Is there a way to make such conversation more public and viewable and research-able?