Tuesdays are light on research! However, I did get a chance today to pitch an idea to Soledad Villar (NYU) about generative adversarial networks (GANs). In Houston a couple of weeks ago she showed results that use a GAN as a regularizer (or prior) for de-noising noisy data. But she had trained the GAN on noise-free data. I think you could even train this GAN on noisy data, provided that you follow the generator with a noisification step before it hands its output to the discriminator. In principle, the GAN should learn the noise-free model from the noisy data in this case. But I don't understand whether there is enough information. We discussed and planned a few extremely simple experiments to test this.
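The idea above amounts to a one-line change in the GAN pipeline: the discriminator never sees clean generator output, only a noisified version, which it compares against the (noisy) data. Here is a minimal sketch of that forward pass in numpy; the linear generator, the Gaussian noise model, and the `sigma` value are all hypothetical stand-ins, not anything from Villar's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.1  # assumed (known) noise level of the observed data; hypothetical

def generator(z, w):
    # toy generator: a linear map from latent draws z to "clean" samples
    return z @ w

def noisify(x, sigma, rng):
    # the proposed extra step: corrupt the generator output with the
    # same noise model as the data, before the discriminator sees it
    return x + sigma * rng.normal(size=x.shape)

# hypothetical shapes: 64 latent draws of dimension 2, mapped to 1-D samples
z = rng.normal(size=(64, 2))
w = rng.normal(size=(2, 1))
fake_clean = generator(z, w)               # what we hope converges to the noise-free model
fake_noisy = noisify(fake_clean, sigma, rng)  # what the discriminator actually sees
```

In training, the discriminator would be asked to distinguish `fake_noisy` from real noisy data; if that succeeds, `generator(z, w)` is (one hopes) an estimate of the underlying noise-free distribution, which is exactly the open information-theoretic question.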
In the NYU Astro Seminar, Jia Liu (Princeton) spoke about neutrino masses. Right now the best limits on the total neutrino mass are from large-scale structure (although many particle physicists are skeptical because these limits involve the very baroque cosmological model, and not just accelerator and detector physics). Liu has come up with some very clever observables in cosmology (large-scale structure) that could do an even better job of constraining the total neutrino mass. I asked what is now one of my standard questions: If you have a large suite of simulations with different assumptions about neutrinos (she does), and a machinery for writing down permitted observables (no-one has this yet!), you could have a robot decide what the best observables are. That is, you could use brute force instead of cleverness, and you might do much, much better. This is still on my to-do list!
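The robot idea can be sketched very crudely: enumerate a family of candidate observables, evaluate each on two simulation suites that differ in their neutrino assumptions, and rank them by how well they separate the suites. Everything below is hypothetical, including the Gaussian stand-ins for the simulation suites, the tiny menu of one-point statistics, and the signal-to-noise ranking; the real "machinery for writing down permitted observables" is exactly the missing piece.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical stand-ins for two simulation suites with different total
# neutrino mass: 50 realizations each, with small shifts in the field statistics
suite_a = rng.normal(0.00, 1.00, size=(50, 1000))
suite_b = rng.normal(0.02, 0.98, size=(50, 1000))

# a toy "machinery for writing down permitted observables":
# each maps one simulated field (a row) to a scalar statistic
observables = {
    "mean": lambda f: f.mean(axis=1),
    "variance": lambda f: f.var(axis=1),
    "skewness": lambda f: ((f - f.mean(axis=1, keepdims=True)) ** 3).mean(axis=1),
    "peak fraction": lambda f: (f > 2.0).mean(axis=1),
}

def separation(stat_a, stat_b):
    # crude signal-to-noise: mean shift between suites over pooled scatter
    pooled = np.sqrt(0.5 * (stat_a.var() + stat_b.var()))
    return abs(stat_a.mean() - stat_b.mean()) / pooled

# the "robot": rank observables by how strongly they separate the suites
ranked = sorted(observables,
                key=lambda name: separation(observables[name](suite_a),
                                            observables[name](suite_b)),
                reverse=True)
```

The brute-force bet is that, with a rich enough family of observables and enough simulations, this ranking finds statistics no amount of cleverness would have suggested; the hard part is generating the family, not the ranking.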