I am preparing slides for a new talk (meaning: no recycled slides!) that I will be giving at MIT in the coming week. I am trying to boil down what's so troubling about machine learning in the natural sciences. I realized that what's so troubling (to me) is that the only standard of truth for a machine-learning model is its performance on a finite set of held-out test data (usually drawn from the same distribution as the training data). That is nothing like the standard of truth for scientific models or theories! So, for example, in industrial machine-learning contexts, a successful adversarial attack against a method is seen as a strange quirk. But I think that a successful adversarial attack completely invalidates a machine-learning method used in (many contexts in) the natural sciences: It shows that the model isn't doing what we expect it to be doing. I hope to be able to articulate all this before Tuesday.
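To make the tension concrete, here is a minimal sketch of my own construction (not from the talk): a hand-rolled logistic regression that scores well on held-out test data, followed by a fast-gradient-sign (FGSM-style) perturbation that nudges every feature of a test input by a small amount in the direction that increases the loss. The synthetic data, the model, and the value of `eps` are all assumptions for illustration; the point is only that "good held-out accuracy" and "robust to tiny targeted perturbations" are different standards of truth.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, mu = 500, 100, 0.2  # samples, features, per-feature class signal (assumed)

def make_data(n):
    # Two classes whose means differ by 2*mu in every coordinate, noise std 1.
    y = (rng.random(n) < 0.5).astype(float)
    X = rng.normal(size=(n, d)) + mu * (2 * y - 1)[:, None]
    return X, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X, y = make_data(n)
X_test, y_test = make_data(n)

# Train by plain gradient descent on the logistic loss.
w = np.zeros(d)
for _ in range(1000):
    w -= 0.2 * X.T @ (sigmoid(X @ w) - y) / n

# The industrial standard of truth: accuracy on held-out data.
test_acc = ((sigmoid(X_test @ w) > 0.5) == (y_test > 0.5)).mean()

# FGSM-style attack: step each feature by eps in the sign of the loss
# gradient with respect to the input, which here is (p - y_true) * w.
x, y0 = X_test[0], y_test[0]
p_clean = sigmoid(x @ w)
eps = 0.4  # small relative to the per-feature noise (std 1)
x_adv = x + eps * np.sign((p_clean - y0) * w)
p_adv = sigmoid(x_adv @ w)

print(f"held-out accuracy: {test_acc:.2f}")
print(f"true label {y0:.0f}: p(class 1) clean {p_clean:.3f} vs adversarial {p_adv:.3f}")
```

The perturbation is bounded by `eps` in every coordinate, yet it provably moves the predicted probability away from the true label (the logit shifts by `eps` times the L1 norm of the weights). If the model were really measuring the quantity we think it is, no such tiny, structured nudge should be able to do that.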
Hope you'll post the slides!
I attended your Bayesian workshop many, many years ago in Heidelberg (it was excellent but I was too shy to talk to you!). Left astrophysics about a decade ago for data science / ML.
A few months ago, I decided to look up people I admired and found your blog! :)