computing stable derivatives

In my science time today, I worked with Ana Bonaca (Harvard) on her computation of derivatives—of stellar stream properties with respect to potential parameters. This is all part of our information-theoretic project on stellar streams. We are taking the derivatives numerically, which is challenging to get right, and we have had many conversations about step sizes and how to choose them. We made (what I hope are) final choices today: they involve computing the derivative at a range of step sizes, comparing each of those derivatives to those computed at nearby step sizes, and taking the smallest step size at which the derivatives are converged (consistent with one another). Adaptive and automatic! But a pain to get working right.

Numerical context: If you take derivatives with step sizes that are too small, you get killed by numerical noise. If you take derivatives with step sizes that are too large, the changes aren't purely linear in the stepped parameter. The Goldilocks step size is not trivial to find.
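The procedure described above can be sketched in a few lines. This is a hypothetical toy version, not the actual code from the project: the grid of trial step sizes, the central-difference stencil, and the `rtol` consistency threshold are all illustrative choices. The idea is to compute the derivative at a ladder of step sizes from large to small, and keep shrinking the step as long as neighboring estimates agree—stopping once disagreement returns, which signals that numerical noise has taken over.

```python
import numpy as np

def adaptive_derivative(f, x, h_values=None, rtol=1e-5):
    """Central-difference derivative of f at x, choosing the smallest
    step size at which nearby step sizes give consistent answers.

    A toy sketch of the adaptive idea; h_values and rtol are
    illustrative, not the project's actual settings.
    """
    if h_values is None:
        # geometrically spaced trial steps, largest first
        h_values = 10.0 ** np.arange(0.0, -10.0, -0.5)
    derivs = [(f(x + h) - f(x - h)) / (2.0 * h) for h in h_values]
    best_h, best_d = None, None
    for i in range(1, len(h_values)):
        if np.isclose(derivs[i], derivs[i - 1], rtol=rtol):
            # consistent with the next-larger step: smallest good h so far
            best_h, best_d = h_values[i], derivs[i]
        elif best_h is not None:
            # we had a converged plateau and it just ended: noise regime
            break
    return best_h, best_d
```

At large steps the estimates drift with the truncation error (the change in `f` is not purely linear in the step), so consecutive estimates disagree; in the converged plateau they agree to within `rtol`; at very small steps roundoff noise breaks the agreement again, which is where the loop stops.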


  1. Couldn't this be more easily done with TensorFlow?

    1. Yes! As long as you don't mind implementing a full n-body simulation in TensorFlow!