adversarial approaches to everything

Today's parallel-working session at NYU was a dream. Richard Galvez (NYU) is working with Rob Fergus (NYU) to train a generative adversarial network on images of galaxies. One issue with GANs is that the generator can make convincing fake data that covers only a subspace of the whole data space, and still succeed adversarially. So Galvez is running k-means clustering in the data space and comparing the cluster populations in the true data and in the generated data, to check that the coverage is good. This is innovative, and important if we are going to use these GANs for science.
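A minimal sketch of this kind of coverage check (toy data and my own pure-numpy k-means stand-in, not Galvez's actual code or clustering choices): fit clusters on the true data, then compare the fraction of true versus generated samples landing in each cluster. A mode-collapsed generator shows up as clusters it never populates.

```python
import numpy as np

def kmeans(data, k, n_iter=20, seed=0):
    """Plain Lloyd's algorithm with farthest-point initialization,
    which is robust when the clusters are well separated."""
    rng = np.random.default_rng(seed)
    centers = [data[rng.integers(len(data))]]
    for _ in range(k - 1):
        d2 = np.min([((data - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(data[np.argmax(d2)])
    centers = np.array(centers)
    for _ in range(n_iter):
        labels = np.argmin(((data[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = data[labels == j].mean(axis=0)
    return centers

def cluster_fractions(data, centers):
    """Fraction of samples assigned to each cluster center."""
    labels = np.argmin(((data[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
    return np.bincount(labels, minlength=len(centers)) / len(data)

# Toy demo: the "true" data has two modes; the "generator" covers only one.
rng = np.random.default_rng(42)
real = np.vstack([rng.normal(-5.0, 1.0, (500, 2)),
                  rng.normal(+5.0, 1.0, (500, 2))])
fake = rng.normal(+5.0, 1.0, (1000, 2))  # mode-collapsed generator

centers = kmeans(real, k=2)              # clusters fit on the true data only
real_frac = cluster_fractions(real, centers)
fake_frac = cluster_fractions(fake, centers)
# real_frac is roughly (0.5, 0.5); fake_frac is heavily skewed to one cluster,
# flagging the missing mode.
```

The key design point is that the clustering is fit on the true data alone, so the generated samples are scored against a partition they had no say in.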

Kate Storey-Fisher (NYU) is making something like adversarial (there's that word again) mock catalogs for large-scale structure projects: She is going to make the selection function in each patch of the survey a nonlinear function of the housekeeping data (point-spread function, stellar density, transparency, season, and so on) that we have for that patch. Then we can see which LSS statistics are robust to the craziness. These mocks are adversarial in the sense that they represent a universe that is out to trick us, while GANs are adversarial in the sense that they use an internal competitive game for training.
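Here is a toy sketch of what one such adversarial selection function could look like; the housekeeping quantities, the functional form, and every coefficient are invented for illustration, not Storey-Fisher's actual choices. The point is just that the per-patch detection probability depends nonlinearly, with interactions, on the housekeeping data, and then thins the underlying galaxy counts.

```python
import numpy as np

rng = np.random.default_rng(1)
n_patches = 1000

# Hypothetical housekeeping data per survey patch (names illustrative).
psf_fwhm = rng.normal(1.0, 0.1, n_patches)          # seeing, arcsec
stellar_density = rng.lognormal(0.0, 0.3, n_patches)  # relative units
transparency = rng.uniform(0.7, 1.0, n_patches)

# An "adversarial" selection function: a deliberately nonlinear,
# interacting logistic function of the housekeeping quantities.
selection = 1.0 / (1.0 + np.exp(-(
    3.0 * (transparency - 0.85)
    - 2.0 * (psf_fwhm - 1.0) ** 2
    - 0.5 * np.log(stellar_density) * (psf_fwhm - 1.0)
)))

# Thin the true counts patch-by-patch to produce the observed catalog.
true_counts = rng.poisson(100.0, n_patches)          # underlying galaxies
observed = rng.binomial(true_counts, selection)      # what the survey "sees"
```

Any LSS statistic computed from `observed` that disagrees with the same statistic computed from `true_counts` is, by construction, not robust to this kind of trickery.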

And as I was explaining why I am disappointed with the choices that LSST has made for broad-band filters, Alex Malz (NYU) and I came up with an inexpensive and executable proposal that would satisfy me and improve LSST. It involves inexpensive and easy-to-make stochastically ramped filters. I don't think there is a snowball's chance in hell that the Collaboration would even for a moment consider this plan, but the proposal is a good one. I guess this is adversarial in a third sense!
