O. M. G. As my loyal reader knows, Soledad Villar (JHU) and I are trying to build a replacement for convolutional neural networks that can handle geometric objects (scalars, vectors, pseudovectors, tensors of any order, and so on) and that can create functions that exactly (or approximately, if you like) obey the symmetries of classical physics (rotation, translation, parity, boost, maybe even gauge). Our method produces polynomial functions of images (functions where both the input and the output are images), making use of convolutions, outer products, index contractions, index permutations, cross products, pooling, and so on.
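To give a flavor of the outer-product convolution idea, here is a minimal numpy sketch, not our actual code: a toy `geometric_convolve` (a name I'm inventing here) that convolves a vector image with a vector filter by taking the outer product of the tensor values at each pixel, producing a 2-tensor image, which can then be contracted back down to a scalar image. The real method also tracks parity and uses symmetry-restricted filters; none of that appears in this sketch.

```python
import numpy as np

def geometric_convolve(image, filt):
    """Convolve a geometric image with a geometric filter, combining their
    per-pixel tensor values with an outer product (periodic boundary).
    image: (n, n, ...) array; filt: (k, k, ...) array with k odd."""
    n, k = image.shape[0], filt.shape[0]
    m = k // 2
    out = np.zeros(image.shape[:2] + image.shape[2:] + filt.shape[2:])
    for x in range(n):
        for y in range(n):
            for i in range(k):
                for j in range(k):
                    px, py = (x + i - m) % n, (y + j - m) % n
                    # outer product of the two tensor values at this pixel pair
                    out[x, y] += np.multiply.outer(image[px, py], filt[i, j])
    return out

rng = np.random.default_rng(0)
vec_image = rng.normal(size=(4, 4, 2))    # a 2-D vector image
vec_filter = rng.normal(size=(3, 3, 2))   # a 2-D vector filter

two_tensor_image = geometric_convolve(vec_image, vec_filter)  # (4, 4, 2, 2)
scalar_image = np.trace(two_tensor_image, axis1=2, axis2=3)   # index contraction
print(two_tensor_image.shape, scalar_image.shape)
```

Convolving raises the tensor order (vector times vector gives a 2-tensor at each pixel); contracting a pair of indices lowers it again, which is how the method moves between tensor orders.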

Meanwhile, Ben Blum-Smith (NYU, JHU) has been doing (scary to me) group theory stuff in which he has been computing how many distinct polynomial functions of images of a fixed degree are possible, given image inputs and outputs (of some tensor orders), when those polynomial functions obey the symmetries of classical physics. And he has results! He can tell us, say, how many distinct linear and quadratic functions there are that take vector images to vector images. It's a formula that depends on the image size and the degree of the polynomial.

Today Villar and I had a breakthrough: We used our geometric generalization of convolutions to produce all possible linear functions of small images, deduplicated the results using a singular value decomposition, and (thereby) counted how many linearly independent group-equivariant linear functions there are from vector images to vector images. And *our results agree with Blum-Smith's formula*. So we may actually have a complete basis for all image functions that could ever exist in the context of classical physics?
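The SVD deduplication step is simple enough to sketch. Below is an illustrative stand-in, not our actual pipeline: the candidate functions are ordinary single-pixel-filter convolutions (shift operators) on a tiny scalar image, with some redundant linear combinations thrown in, rather than the full set of geometric, equivariant candidates. Each candidate linear map is flattened to a row, and the number of singular values above a tolerance is the count of linearly independent functions.

```python
import numpy as np

def shift_operator(n, dx, dy):
    """Matrix that cyclically shifts a flattened n x n image by (dx, dy)."""
    op = np.zeros((n * n, n * n))
    for x in range(n):
        for y in range(n):
            op[((x + dx) % n) * n + (y + dy) % n, x * n + y] = 1.0
    return op

n = 3
# Candidate linear image->image functions: convolution with every
# one-pixel filter (i.e., every cyclic shift) ...
candidates = [shift_operator(n, dx, dy) for dx in range(n) for dy in range(n)]
# ... plus two deliberately redundant linear combinations of them.
candidates.append(candidates[0] + 2.0 * candidates[3])
candidates.append(candidates[1] + candidates[2])

# Flatten each operator to a row vector, then count the linearly
# independent candidates as the numerical rank via the SVD.
M = np.stack([c.ravel() for c in candidates])
svals = np.linalg.svd(M, compute_uv=False)
rank = int(np.sum(svals > 1e-10 * svals.max()))
print(rank)  # 9: the two redundant candidates are deduplicated away
```

The same rank count works no matter how the candidates were generated; comparing it to a closed-form dimension formula is exactly the kind of check described above.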