Sign and Basis Invariant Networks for Spectral Graph Representation Learning

If you have a question about this talk, please contact Iulia Duta.

Eigenvectors computed from data arise in many settings, including principal component analysis and matrix factorizations. Another key example is the eigenvectors of the graph Laplacian, which encode information about the structure of a graph or manifold. An important recent application of Laplacian eigenvectors is graph positional encodings, which have been used to develop more powerful graph architectures. However, eigenvectors have symmetries that should be respected by models taking eigenvector inputs: (i) sign flips, since if v is an eigenvector then so is -v; and (ii) more general basis symmetries, which occur in higher-dimensional eigenspaces with infinitely many choices of basis eigenvectors. We introduce SignNet and BasisNet, new neural network architectures that are sign and basis invariant. We prove that our networks are universal, i.e., they can approximate any continuous function of eigenvectors with the desired invariances. Moreover, when used with Laplacian eigenvectors, our architectures are provably expressive for graph representation learning: they can approximate, and go beyond, any spectral graph convolution, and can compute spectral invariants beyond the reach of message passing neural networks. Experiments show the strength of our networks for molecular graph regression, learning expressive graph representations, and more.
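The sign-flip symmetry in (i) can be removed by symmetrizing over the two sign choices, which is the core idea behind SignNet: features of the form phi(v) + phi(-v) are unchanged when v is replaced by -v. A minimal sketch, assuming a toy random-feature encoder in place of the learned networks from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))  # toy fixed weights; SignNet learns its encoder as a neural net

def phi(v):
    # Toy per-eigenvector encoder: a random linear map followed by tanh.
    return np.tanh(v @ W)

def sign_invariant_features(v):
    # Symmetrize over the sign flip: phi(v) + phi(-v) is invariant under v -> -v.
    return phi(v) + phi(-v)

v = rng.normal(size=8)  # stand-in for a Laplacian eigenvector
assert np.allclose(sign_invariant_features(v), sign_invariant_features(-v))
```

In the architecture described in the talk, such symmetrized features are computed per eigenvector and then combined by a second network; handling the basis symmetries in (ii) for higher-dimensional eigenspaces requires more machinery, which is what BasisNet adds.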

This talk is part of the id366's list series.



© 2006-2024, University of Cambridge.