Haar Graph Pooling

If you have a question about this talk, please contact jo de bono.

Deep Graph Neural Networks (GNNs) are useful models for graph classification and graph-based regression tasks. In these tasks, graph pooling is a critical ingredient by which GNNs adapt to input graphs of varying size and structure. We propose a new graph pooling operation based on compressive Haar transforms, called HaarPooling. HaarPooling implements a cascade of pooling operations, computed by following a sequence of clusterings of the input graph. A HaarPooling layer transforms a given input graph into an output graph with fewer nodes and the same feature dimension; the compressive Haar transform filters out fine-detail information in the Haar wavelet domain. In this way, all the HaarPooling layers together synthesise the features of any given input graph into a feature vector of uniform size. Such transforms provide a sparse characterisation of the data and preserve the structural information of the input graph. GNNs implemented with standard graph convolution layers and HaarPooling layers achieve state-of-the-art performance on diverse graph classification and regression problems.
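A minimal sketch of the compressive-transform idea, assuming a cluster assignment for the input graph is already given. It keeps only the coarse, per-cluster Haar basis vectors (normalised cluster indicators) and discards the fine-detail vectors, which is the effect of the compressive transform; the function name and this simplified basis are illustrative, not the authors' implementation.

```python
import numpy as np

def haar_pool(X, clusters):
    """Pool node features X (N x d) down to one row per cluster (K x d).

    clusters: length-N integer array assigning each node to a cluster 0..K-1.
    Each coarse Haar basis vector is a cluster indicator scaled by
    1/sqrt(cluster size), so the pooled feature is a scaled cluster sum;
    the orthonormal fine-detail vectors of the full Haar basis are dropped.
    """
    K = int(clusters.max()) + 1
    N, d = X.shape
    Phi = np.zeros((N, K))            # coarse Haar basis, one column per cluster
    for k in range(K):
        idx = clusters == k
        Phi[idx, k] = 1.0 / np.sqrt(idx.sum())
    return Phi.T @ X                  # fewer nodes, same feature dimension
```

Applying such a layer after each clustering in the sequence shrinks the graph step by step, so any input graph ends up as a feature vector of uniform size.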

This talk is based on joint work with Yanan Fan (UNSW), Junbin Gao (U Sydney), Ming Li (ZJNU), Pietro Lio (Cambridge), Zheng Ma (Princeton), Guido Montufar (UCLA), Xuebin Zheng (U Sydney), Bingxin Zhou (U Sydney), and Xiaosheng Zhuang (CityU HK).

The recording of the seminar can be found at:

https://www.cl.cam.ac.uk/seminars/wednesday/video/lt2-201028-wed-1500-153532.html

Talk slides are available to download at https://web.maths.unsw.edu.au/~yuguangwang/talks/Haar_graph_pool_Yuguang.pdf

This talk is part of the Computer Laboratory Wednesday Seminars series.

© 2006-2021 Talks.cam, University of Cambridge.