2023-10-30

who owns a research project?

My day ended today with a great conversation with a postdoc about the ownership of research projects. When you make the transition from graduate student to postdoc, whose projects are whose? Are they your supervisors' projects, or are they yours? And should you keep doing them, or should you move on to new things? I don't think there are easy answers, and I think there are many subtle ways in which people have unresolved differences about these things. Since much of my work these days is postdoctoral mentoring, I've thought about this a lot. My only recommendation, which is hard to implement, is that clear communication about expectations is really, really important. And not just the expectations of the supervisors; the expectations of the (former) student are way more important!

2023-10-29

area of a triangle?

On Friday and the weekend, I came up with (what I think is) a novel formula for the area A of a triangle! That's weird. I was looking for a formula in the Deep Sets (or map-reduce) format. Here it is. It's ridiculous and useless, but it involves only sums over functions of the individual corners of the triangle. It was hard to find! But it's exact (I believe).
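
For context, by the Deep Sets (or map-reduce) format I mean expressions with the generic shape below, in which some shared per-corner function gets summed and then passed through an outer function; the particular functions in my formula aren't written out here:

```latex
% generic Deep Sets / map-reduce shape for a function of the three
% triangle corners r_1, r_2, r_3; phi and rho are placeholders for
% some encoder and decoder functions, not the specific ones I found
A = \rho\!\left(\sum_{i=1}^{3} \phi(\vec{r}_i)\right)
```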

2023-10-25

information theory for spectroscopy

I had a meeting this morning with Megan Bedell (Flatiron) about our dormant paper on information theory and extreme-precision radial-velocity measurements. We see the paper a bit differently (is it about methods or is it about concepts?), but we were able to re-state a scope with which we are both happy. We assigned tasks (Bedell writing and me coding, mainly), and promised to make progress before next week. It is very, very, very hard to finish a paper! Especially when all the authors are beyond a certain level of seniority, at which point they spend most of their time working with others. I would love to get a lot more personal coding time!

2023-10-23

symmetry day: crossing, permutation

Today's brown-bag talk, by Grant Remmen (NYU), was (in part) about crossing symmetry. This is the symmetry under which any Feynman diagram can be rotated by 90 degrees (converting time into space and vice versa) while the interaction keeps the same scattering amplitude. This symmetry relates electron–positron annihilation to electron–electron scattering. The symmetry plays an important role in string theory, because it is a constraint on any possible fundamental theory. This symmetry has always seemed incredible to me, but it is rarely discussed outside very theoretical circles.

After the talk, and in the Blanton–Hogg group meeting, I brought up some things about invariant functions, learned from Soledad Villar (JHU), that are really confusing me: It is possible (in principle, maybe not in practice) to write any permutation-invariant function of N objects as a function of a sum of a universal function evaluated on each of the N objects individually (that's proven). How does that relate to k-point functions? Most physicists believe that any k-point function estimate will require a sum over all N-choose-k k-tuples. That's a huge sum, way bigger than a sum over N. What gives? I puzzled some of the mathematical physicists with this and I remain confused.
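
One special case I do understand, and which maybe sharpens the puzzle: when the pair kernel is harmonic (as in a power-spectrum estimate), the N-squared pair sum factorizes exactly into per-object sums. Here is a tiny numerical check of that point (my own sketch, not anything from the meeting):

```python
# tiny check (my sketch): for a harmonic kernel, the O(N^2) pair sum in a
# 2-point statistic factorizes into per-object sums, so a sum over N suffices
import numpy as np

rng = np.random.default_rng(17)
N = 500
x = rng.uniform(0.0, 100.0, size=N)   # 1-d positions, arbitrary units
k = 0.3                               # wavenumber to probe

# brute force: sum of exp(i k (x_i - x_j)) over all ordered pairs
brute = np.sum(np.exp(1j * k * (x[:, None] - x[None, :])))

# factorized: the same quantity is |sum_i exp(i k x_i)|^2
factored = np.abs(np.sum(np.exp(1j * k * x))) ** 2

print(brute.real, factored)           # these agree to numerical precision
```

Of course that only works because the harmonic kernel separates; the question is what happens for general k-point estimators.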

2023-10-20

Florida, day two

Today was day two of my visit to the University of Florida. I had many interesting discussions. One highlight was a conversation with Dhruv Zimmerman, who wants to infer big labels (non-parametric functions of time) from small features (a few bands of photometry). That's my kind of problem! We discussed different approaches, and we discussed possible featurizations (or dimensionality reductions) of the labels. I also pitched an information-theoretic analysis. If there's one thing I've learned in the last few years, it is that you shouldn't be afraid to solve problems where there are fewer data than parameters! You just have to structure the problem with eyes wide open.
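
To be concrete about the fewer-data-than-parameters point (a toy of my own, not Zimmerman's actual problem): with any sensible prior or regularizer, a linear fit with far more parameters than data points is perfectly well posed.

```python
# toy example (mine, not Zimmerman's problem): fit 50 parameters to 5 data
# points; an L2 (ridge) regularizer, standing in for a prior, makes the
# under-determined least-squares problem well posed
import numpy as np

rng = np.random.default_rng(42)
n_data, n_params = 5, 50
X = rng.normal(size=(n_data, n_params))            # design matrix
beta_true = rng.normal(size=n_params)
y = X @ beta_true + 0.1 * rng.normal(size=n_data)  # noisy data

lam = 1.0                                          # regularization strength
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_params), X.T @ y)

print("rms residual:", np.sqrt(np.mean((X @ beta_hat - y) ** 2)))
```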

After many more (equally interesting) discussions, the day ended with Sarah Ballard's group at a lovely beer garden. We discussed the question: Should students be involved in, and privy to, all the bad things that we faculty have to deal with as academics, or should we protect students from those things? You can imagine my position, since I am all about transparency. But the positions were interesting. Ballard pointed out that in an advisor–student relationship, the student might not feel that they can refuse when the advisor wants to unload their feelings! That power asymmetry is very real. But Ballard's students (Chance, Guerrero, Lam, Seagear) said that they want to understand the bad things too; they aren't in graduate school just to write papers (that comment is for you, Quadry!).

2023-10-19

Florida, day one

I spent today with Sarah Ballard's group, plus others, at the University of Florida. I gave a talk to a large, lively, and delightful audience. At the end of the talk, I was very impressed by the following: Ballard had everyone in the room discuss with their neighbors (turn and talk) for about 3 minutes, after the seminar but before the question period began! This is a technique I use in class sometimes; it increases participation. After those 3 minutes, audience members had myriad questions, as one might imagine.

I spoke with many people in the Department about their projects. One highlight was Jason Dittman, who showed me gorgeous evidence that a particular warm exoplanet on an eccentric orbit has an atmosphere that undergoes some kind of phase change at some critical insolation as it moves away from its host star. Crazy!

Late in the day I discussed n-point functions and other cosmological statistics with Zach Slepian and Jiamin Hou. We discussed the plausibility of getting tractable likelihoods for any n-point functions. We also discussed the oddity that n-point functions involve sums over n-star configurations among N stars (N choose n), but there are mathematical results that show that any permutation-invariant function of any point cloud can be expressed with only a sum over stars (N). That sounds like a research problem!

2023-10-18

biases from machine learning

Today I gave a talk (with these slides) at a meeting in Denver for the NSF initiative Harnessing the Data Revolution. I spoke about the necessity and also the dangers of using machine-learning methods in scientific projects. I brought up two very serious possible biases. The first is that if emulators are used to replace simulations, and they can't easily be checked (because the full simulations are too expensive to run), the emulators will lead to a confirmation-bias problem: We will only carefully check the emulations when they deliver results that we don't like! The second bias I raised is that if we perform joint analyses on objects (stars, say) that have been labeled (with ages, say) by a machine-learning regression, there will in general be strong biases in those joint analyses. For example, the average value of 1000 age labels for stars labeled by a standard ML regression will not be anything like an unbiased estimate of the true average age of those stars. These biases are very strong and bad! That said, I also gave many examples of places where using machine-learning methods is not just okay but actually intellectually correct, in areas like instrument calibration, foregrounds, and other confounders.
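
Here is a toy numerical illustration of the second bias (my own sketch, not from the slides): train an ordinary regression of age on a noisy feature using one population of stars, apply it to a population with a different age distribution, and the average of the predicted labels gets pulled toward the training mean.

```python
# toy illustration (my sketch, not from the talk): averaging ML age labels
# is biased when the regression is applied to a population that differs
# from the training set; predictions are pulled toward the training mean
import numpy as np

rng = np.random.default_rng(3)

def make_stars(mean_age, n):
    ages = rng.normal(mean_age, 2.0, size=n)                       # Gyr
    features = ages[:, None] + rng.normal(0.0, 3.0, size=(n, 1))   # noisy age proxy
    return features, ages

X_train, age_train = make_stars(mean_age=5.0, n=5000)
X_test, age_test = make_stars(mean_age=8.0, n=1000)                # older population

# least-squares regression of age on the noisy feature (plus a constant)
A = np.hstack([X_train, np.ones((len(age_train), 1))])
coef, *_ = np.linalg.lstsq(A, age_train, rcond=None)

age_pred = np.hstack([X_test, np.ones((len(age_test), 1))]) @ coef

print("true mean age of test stars:", age_test.mean())    # about 8 Gyr
print("mean of the ML age labels:  ", age_pred.mean())    # pulled toward 5 Gyr
```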

The question period was great! We had 25 minutes of questions and answers, which ranged across a very wide set of topics, including statistics, experimental design, and epistemology.

2023-10-17

Bayesian evidence?

Kate Storey-Fisher, Abby Williams, and I spent some time discussing unpublished work that relies heavily on calculations of the Bayesian evidence. Bayesian evidence—what I call the “fully marginalized likelihood”—relates to the volume of the posterior in parameter space. It is generally extremely sensitive to the width of the prior pdf, since if you are comparing two models with different parameterizations, the numbers you get depend on how you normalize or scale out the units of those parameter-space volumes. Indeed, you can get any evidence ratios you want by tuning prior pdf widths. That's bad if you are trying to conclude something, scientifically! Bayesian inference is only principled, imho, when you can quantitatively state the prior pdf that correctly describes your beliefs, prior to seeing the new data. And even then, your evidence is special to you; any other scientist has to recompute from scratch.
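
To illustrate the prior-width sensitivity concretely (a tiny sketch of my own, not from our discussion): for a one-parameter Gaussian model, the fully marginalized likelihood drops roughly in proportion to one over the prior width once the prior is much wider than the likelihood, so an evidence ratio between two such models can be dialed up or down by dialing the prior widths.

```python
# tiny sketch (mine): the fully marginalized likelihood of a one-parameter
# Gaussian model falls roughly as 1 / (prior width) once the prior is much
# wider than the likelihood, so evidence ratios can be tuned via prior widths
import numpy as np

rng = np.random.default_rng(8)
data = rng.normal(1.0, 1.0, size=10)           # 10 points, known sigma = 1

mu = np.linspace(-20.0, 20.0, 40001)           # fine grid over the mean parameter
dmu = mu[1] - mu[0]
log_like = -0.5 * np.sum((data[None, :] - mu[:, None]) ** 2, axis=1) \
           - 0.5 * len(data) * np.log(2.0 * np.pi)

for tau in [1.0, 10.0, 100.0, 1000.0]:         # prior width on the mean
    log_prior = -0.5 * (mu / tau) ** 2 - 0.5 * np.log(2.0 * np.pi * tau ** 2)
    Z = np.sum(np.exp(log_like + log_prior)) * dmu
    print(f"prior width {tau:7.1f} -> log evidence {np.log(Z):8.3f}")
```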

2023-10-16

representation of flexible functions

Emily Griffith (Colorado) and I met today to look at replacing a spline interpolation function deep inside some of our code with a Fourier series. The idea is that we need a flexible function of one variable, and we were using a spline of a set of control points, but (for many reasons) we wanted to change to a sum of sines and cosines. The code work was a mess! The small change hits a lot of places inside our model, which is our K-process data-driven nucleosynthetic model. This same problem appears in the new version of wobble by Matt Daunt (NYU). I love flexible functions, but it's hard to implement them in a properly abstracted way. That is, it is hard to write a model so that you can just swap in a Fourier series or a Gaussian process where you used to have an interpolation of control points.
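
To make the abstraction I'm wishing for concrete, here is a sketch (not our actual code, and not the K-process model): if the flexible 1-d function is always "design matrix times weights", then swapping an interpolation of control points for a Fourier series only changes how the design matrix gets built.

```python
# sketch of the abstraction I want (not our actual code): represent the
# flexible 1-d function as design-matrix @ weights, so a Fourier series and
# an interpolation of control points differ only in the design-matrix builder
import numpy as np

def fourier_design(x, n_modes, period):
    """Columns of 1, cos, and sin terms evaluated at x."""
    cols = [np.ones_like(x)]
    for m in range(1, n_modes + 1):
        cols.append(np.cos(2.0 * np.pi * m * x / period))
        cols.append(np.sin(2.0 * np.pi * m * x / period))
    return np.vstack(cols).T

def interpolation_design(x, knots):
    """Columns that linearly interpolate a unit value at each control point."""
    cols = []
    for k in range(len(knots)):
        unit = np.zeros(len(knots))
        unit[k] = 1.0
        cols.append(np.interp(x, knots, unit))   # hat function at knot k
    return np.vstack(cols).T

# either way, the flexible function is just design @ weights
x = np.linspace(0.0, 10.0, 200)
y = np.sin(x) + 0.1 * np.random.default_rng(0).normal(size=x.size)

for design in (fourier_design(x, n_modes=5, period=10.0),
               interpolation_design(x, knots=np.linspace(0.0, 10.0, 12))):
    weights, *_ = np.linalg.lstsq(design, y, rcond=None)
    print(design.shape, np.sqrt(np.mean((design @ weights - y) ** 2)))
```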

2023-10-13

precision spectroscopy

One research highlight from the day was a conversation with Madeleine MacKenzie (ANU) about many things, including measuring magnesium isotopes in high-quality (high-resolution and high-SNR) stellar spectra. This comes just after a conversation (yesterday) with Matt Daunt (NYU), who says he wants to do something with the extremely high-quality stellar spectra produced by the jax-wobble pipeline we are building. So I think there is a project to do. If we measure Mg isotopes, even for a few stars, we might be able to fit them into the 2d model of disk abundances that Emily Griffith (Colorado) and I are building. The model is so simple that we would only need a few stars to learn something interesting, including learning that the isotope-ratio variations (seen by MacKenzie) represent some new kind of variability. Do they?

2023-10-02

first-ever Blanton–Hogg group meeting?

Today was not the first-ever Blanton–Hogg group meeting. But it was the first ever for me, since I missed the first two for health and travel reasons. It was great! Tardugno (NYU) showed simulations of gas disks with embedded planets. The planets affect the disk, and the disk causes the planets to interact. Daunt (NYU) showed that his method for inferring (simultaneously) the spectrum, tellurics, and radial velocities in stellar spectra works. I am stoked! Novara (NYU) showed that he has a bug in his code! But we had a good discussion, inspired by that bug, about surfaces of section in a real dynamics problem. Gandhi (NYU) showed us a paper with questionable claims about CMB light passing through galaxy halos?