2022-02-15

dimensional scalings improve predictions

As my loyal reader knows, I have been working on the possibility that machine-learning or regression methods could be made sensitive to physical dimensions or units and thereby generalize better. Today we had a success! Weichi Yao (NYU) has been converting the model in this paper on geometric methods into a model that respects dimensional scalings (or dimensional symmetry, or units equivariance). It hasn't been doing better than the non-dimensional-symmetric versions, in part (we think) because the dimensionless invariants we produce have worse conditioning properties than the raw labels. But Soledad Villar (JHU) had a good idea: Let's test on data that lie outside the distribution used to generate the training set! That worked beautifully today: we get far better predictions for the out-of-sample test points than the competing methods do. Why? Because the dimensional scalings guarantee certain aspects of generalization.
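Here's a minimal sketch of the idea, in Python. This is not Yao's model, just a toy problem (projectile range, R = v² sin(2θ) / g) of my own choosing; it shows how nondimensionalizing the label and inputs can deliver out-of-distribution generalization that a raw regression can't:

```python
# A minimal sketch (toy problem, not the model discussed above):
# the dimensionless label R * g / v**2 depends only on the dimensionless
# input theta, so a regression in dimensionless variables extrapolates to
# new speeds and gravities by construction; a raw regression does not.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

def make_data(n, v_range, g_range):
    v = rng.uniform(*v_range, n)          # launch speed [m/s]
    theta = rng.uniform(0.1, 1.4, n)      # launch angle [rad], dimensionless
    g = rng.uniform(*g_range, n)          # gravity [m/s^2]
    R = v**2 * np.sin(2 * theta) / g      # range [m]
    return v, theta, g, R

# Train in one regime; test far outside it in the dimensionful variables
# (faster projectiles, weaker gravity), but with the same angle range.
v_tr, th_tr, g_tr, R_tr = make_data(2000, (5.0, 20.0), (8.0, 12.0))
v_te, th_te, g_te, R_te = make_data(500, (100.0, 300.0), (1.0, 4.0))

# Baseline: regress the raw, dimensionful label on raw features.
raw = RandomForestRegressor(random_state=0)
raw.fit(np.column_stack([v_tr, th_tr, g_tr]), R_tr)
R_raw = raw.predict(np.column_stack([v_te, th_te, g_te]))

# Units-equivariant version: divide the label by the only available length
# scale v**2 / g, and regress on the dimensionless input theta alone.
dimless = RandomForestRegressor(random_state=0)
dimless.fit(th_tr[:, None], R_tr * g_tr / v_tr**2)
R_dimless = dimless.predict(th_te[:, None]) * v_te**2 / g_te

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
print("raw-feature RMSE on OOD test:  ", rmse(R_raw, R_te))
print("dimensionless RMSE on OOD test:", rmse(R_dimless, R_te))
```

On this toy the raw regression fails badly out of distribution, while the dimensionless one nails it, because the dimensionless input never leaves its training range. That's the flavor of the guarantee.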
