2023-11-14

conjectures about pre-training

On Monday of this week, Shirley Ho (Flatiron) gave a talk at NYU in which she mentioned the unreasonable effectiveness of pre-training a neural network: If, before you train your network on your real (expensive, small) training data, you train it on a lot of (cheap, approximate) pre-training data, you get better overall performance. Why? Ho discussed this in the context of PDE emulation: She pre-trains with cheap PDEs and then trains on expensive PDEs, and she gets way better performance than she does if she just trains on the expensive stuff.
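To make the setup concrete, here is a minimal sketch of the pre-train-then-fine-tune comparison. This is not Ho's code; the tiny MLP, the data shapes, and the random tensors standing in for the cheap and expensive simulation data are all placeholders.

```python
import torch
import torch.nn as nn

def make_mlp():
    # Placeholder emulator; the real architectures are much richer.
    return nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 16))

def train(model, X, Y, epochs=200, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(X), Y).backward()
        opt.step()
    return model

# Random tensors standing in for many cheap pairs and a few expensive pairs.
X_cheap, Y_cheap = torch.randn(10000, 16), torch.randn(10000, 16)
X_exp, Y_exp = torch.randn(64, 16), torch.randn(64, 16)

# Baseline: train on the expensive data alone.
baseline = train(make_mlp(), X_exp, Y_exp)

# Pre-train on the cheap data, then fine-tune on the expensive data.
pretrained = train(make_mlp(), X_cheap, Y_cheap)
finetuned = train(pretrained, X_exp, Y_exp)
```

The claim is that `finetuned` beats `baseline` on held-out expensive data, even though the pre-training data are only approximate.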

Why does this work? One interesting observation is that even pre-training on cat videos helps with the final training! Ho's belief is that pre-training teaches the network about time continuity and other kinds of smoothness. My conjecture is that the pre-training teaches the network about (approximate) diffeomorphism invariance (coordinate freedom). The cool thing is that these conjectures could be tested with interventions!
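As one possible concretization of such an intervention (my sketch, not anything proposed in the talk): build two pre-training sets that differ only in whether a random coordinate remapping is applied, pre-train on each, fine-tune both on the expensive data, and compare. The `random_coordinate_change` helper below is hypothetical, and a cyclic shift plus reflection is only a crude stand-in for a real (approximate) diffeomorphism.

```python
import torch

def random_coordinate_change(fields):
    """Crude stand-in for a random coordinate remapping of the spatial
    axis: a random cyclic shift plus an optional reflection."""
    shift = int(torch.randint(fields.shape[-1], (1,)))
    fields = torch.roll(fields, shifts=shift, dims=-1)
    if torch.rand(1).item() < 0.5:
        fields = torch.flip(fields, dims=[-1])
    return fields

# A batch of 1-d snapshots standing in for cheap simulation frames.
fields = torch.randn(10000, 128)

# Intervention: two pre-training sets differing only in the remapping.
# If the coordinate-freedom conjecture is right, the remapped set should
# deliver at least as much downstream benefit after fine-tuning.
pretrain_raw = fields
pretrain_remapped = random_coordinate_change(fields)
```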
