Today Soledad Villar (JHU) and I went back to the idea of adversarial attacks against linear regression. But this time we looked at the training-set side, meaning: Can you attack the method with adversarial training data? One idea: What is the worst single data point you can add to the training set to distort the results? It looks to me like ordinary least squares is incredibly (indeed arbitrarily) sensitive to this kind of attack when the dimension of the data (the number of parameters) exceeds the size of the training set. The intuition is that in this over-parameterized regime the fit interpolates every training point, so a single extreme point must be fit exactly, and its influence can be made as large as the attacker likes. This kind of sensitivity to attack somehow militates against the idea that more heavily parameterized models are better (which is the prevailing thinking in machine learning these days).
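Here is a minimal sketch of the kind of attack I have in mind; this is one illustrative construction (my choice of null-space attack point and the 1e6 label are assumptions for demonstration), not necessarily the worst-case point. It relies on the fact that when parameters outnumber data, the pseudo-inverse delivers the minimum-norm interpolating solution, so an attack point orthogonal to the existing data gets fit exactly:

```python
import numpy as np

rng = np.random.default_rng(17)

n, p = 10, 50                 # fewer data points than parameters
X = rng.normal(size=(n, p))
w_true = rng.normal(size=p)
y = X @ w_true                # noiseless labels, for clarity

# minimum-norm least-squares fit (what pinv returns when p > n)
w_hat = np.linalg.pinv(X) @ y

x_test = rng.normal(size=p)
print("clean prediction:   ", x_test @ w_hat)

# adversarial training point: the component of x_test lying in the
# null space of X, so the attack point is orthogonal to all existing rows
P = np.linalg.pinv(X) @ X     # projector onto the row space of X
x_adv = x_test - P @ x_test   # null-space component; X @ x_adv ~ 0
y_adv = 1e6                   # arbitrary label; as extreme as you like

X2 = np.vstack([X, x_adv])
y2 = np.append(y, y_adv)
w_hat2 = np.linalg.pinv(X2) @ y2

print("attacked prediction:", x_test @ w_hat2)
```

Because the clean fit lives entirely in the row space of X, the attacked prediction at x_test gets shifted by exactly y_adv: the attacker can move it anywhere with one data point.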