2021-03-31

an FMM GNN?

Today Soledad Villar (JHU) and I discussed the possibility of building something akin to a graph neural network that takes advantage of the n log(n) scaling of the fast multipole method (FMM) and its hierarchical summary graph. The idea is to make highly connected or fully connected graph neural networks fast through the same trick that makes the FMM work: nearby points in the graph talk to each other precisely, while distant parts talk through summaries in a hierarchical set of summary boxels. We think there is a chance this might work in the context of the work we are doing with Weichi Yao (NYU) on gauge-invariant graph neural networks. The gauge invariance is such a strict symmetry that it might permit transmitting information from distant parts of the graph through summaries, while still preserving full (or great) generality. We have yet to figure it all out, but we spent a lot of time drawing boxes on the board.
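To make the boxel idea concrete, here is a minimal sketch, assuming a sorted one-dimensional point set and a single level of summary boxels (a real FMM uses a full hierarchy and careful expansions, and a real model would use a learned, symmetry-respecting aggregation in place of the plain means). None of this is our actual model; the names and choices here are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(17)
n, box_size = 64, 8
h = rng.normal(size=(n, 4))          # per-point features; points assumed sorted
n_boxes = n // box_size
box_id = np.arange(n) // box_size    # consecutive points share a boxel

# Summaries: the mean feature per boxel; a learned, invariance-respecting
# pooling would replace this in a real model.
summaries = h.reshape(n_boxes, box_size, -1).mean(axis=1)

box_ids = np.arange(n_boxes)
messages = np.zeros_like(h)
for i in range(n):
    near_boxes = np.abs(box_ids - box_id[i]) <= 1  # own and adjacent boxels
    near_pts = near_boxes[box_id]                  # points in those boxels
    near_pts[i] = False                            # no self-message
    # Exact messages from nearby points, summary messages from distant boxels.
    messages[i] = (h[near_pts].mean(axis=0)
                   + summaries[~near_boxes].mean(axis=0))

h_new = h + 0.5 * messages                         # one layer of updates
print(h_new.shape)                                 # (64, 4)
```

The point of the sketch is the cost: each point does exact work only over O(box_size) neighbors plus O(n_boxes) summaries, rather than O(n) pairwise messages, and a proper hierarchy of boxels would bring this to the FMM's n log(n).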
