2022-07-29

uncertainty propagation for a neural network

Today Matthias Samland (Stockholm) gave a nice Königstuhl Colloquium at MPIA about direct imaging of exoplanets with high-contrast imaging. He showed some beautiful results from ESO Gravity and from NASA JWST. One of his main take-away points is that the situation is changing fast: We may achieve much higher contrast ratios in the near future than we ever have, and thus detect many more planets.

I spent some time late in the day looking at uncertainty propagation for neural networks: Given that you can optimize a NN, given that it makes good predictions for held-out data, and given that you can take derivatives of everything with respect to everything, does that mean you can propagate errors or noise from the data to the results? I think the answer is yes, but only in a limited sense: You can see how the output depends on the input, at fixed trained weights. What you can't do (and probably will never be able to do) is propagate the uncertainties that come from your finite training set, which show up as uncertainties in the weights. And these weight uncertainties can be very large, especially since the models tend to be enormously over-parameterized, and also contain combinatorially many exact and near-exact degeneracies. (I suspect the near-exact degeneracies are worse than the exact ones.) I vaguely recall Tom Charnock making strong statements about all these things at Ringberg.
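To make the "yes, in a limited sense" concrete, here is a minimal sketch in JAX of the part that does work: first-order propagation of a data covariance through a fixed, trained network via the input Jacobian. The toy network, its shapes, and every name here are made up for illustration; this is not anyone's real pipeline.

```python
import jax
import jax.numpy as jnp

def mlp(params, x):
    """A hypothetical two-layer network; stands in for any trained model."""
    (W1, b1), (W2, b2) = params
    h = jnp.tanh(W1 @ x + b1)
    return W2 @ h + b2

# Pretend these are trained weights (here just random, for the sketch).
k1, k2, k3 = jax.random.split(jax.random.PRNGKey(0), 3)
params = [(jax.random.normal(k1, (8, 3)), jnp.zeros(8)),
          (jax.random.normal(k2, (2, 8)), jnp.zeros(2))]

x = jax.random.normal(k3, (3,))       # a test input
Sigma_x = 0.05 ** 2 * jnp.eye(3)      # assumed covariance of the input noise

# Jacobian of the output with respect to the input, weights held fixed.
J = jax.jacobian(mlp, argnums=1)(params, x)   # shape (2, 3)

# Linearized output covariance: Sigma_y ≈ J Sigma_x J^T.
Sigma_y = J @ Sigma_x @ J.T
print(Sigma_y)
```

Note that the weights enter this calculation as fixed, perfectly known quantities; the training-set-induced uncertainty in the weights, which is the part I claim you can't propagate, appears nowhere in it.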
