2020-04-21

what is an attack?

I did some math in a code notebook this morning for Teresa Huang (NYU), related to our project on adversarial attacks. One of our challenges, and perhaps innovations, is to define and evaluate an adversarial attack in a regression (rather than classification) setting. This, I believe, involves comparing how much one can move the labels (the regression outputs) against some kind of expectation: the model is successfully attacked if a small data perturbation leads to a large perturbation in the output label estimate. Large compared to what? I think compared to some prior expectation. Today I worked out how we estimate that prior expectation, given some sparse data from the physical models used to simulate the spectra and produce parameter estimates. We aren't trying to attack the physical models; we are trying to attack machine-learning surrogates for those models, like The Cannon and AstroNN.
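To make the comparison concrete, here is a minimal sketch of the ratio I have in mind. Everything here is illustrative, not our actual method: the `model` callable, the function name, and the use of a first-order (gradient-direction) attack with a finite-difference gradient are all assumptions I'm making for the sake of the example.

```python
import numpy as np

def attack_ratio(model, x, eps, n_random=1024, seed=None):
    """Compare the label shift from a worst-case (adversarial) data
    perturbation of size eps against the expected label shift from
    random perturbations of the same size.

    model : callable mapping a spectrum (1-d array) to a scalar label
            estimate (hypothetical stand-in for a surrogate like
            The Cannon or AstroNN)
    x     : the input spectrum
    eps   : the L2 norm of the data perturbation
    """
    rng = np.random.default_rng(seed)
    y0 = model(x)
    # Finite-difference gradient of the label with respect to the data.
    h = 1e-5
    g = np.zeros_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += h
        g[i] = (model(xp) - y0) / h
    # First-order worst case: perturb along the gradient direction.
    delta_adv = eps * g / np.linalg.norm(g)
    adversarial_shift = np.abs(model(x + delta_adv) - y0)
    # Prior expectation: the typical label shift under random
    # perturbations of the same L2 size.
    shifts = np.empty(n_random)
    for k in range(n_random):
        d = rng.standard_normal(x.size)
        d *= eps / np.linalg.norm(d)
        shifts[k] = np.abs(model(x + d) - y0)
    # A ratio much greater than unity says a small data perturbation
    # moves the label far more than the prior expectation, i.e. the
    # model was successfully attacked.
    return adversarial_shift / np.mean(shifts)
```

The gradient-direction step is only the linearized worst case; for a strongly nonlinear surrogate one would presumably optimize the perturbation directly, but the numerator-over-denominator structure of the comparison stays the same.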
