
On Activation and Normalization Layers in Graph Neural Networks


If you have a question about this talk, please contact Sam Nallaperuma-Herzberg.

Graph Neural Networks (GNNs) have attracted growing interest in recent years, with most work emphasizing model expressiveness and architectural innovations. By contrast, essential components such as activation and normalization layers remain largely unexplored, with practitioners typically defaulting to ReLU and BatchNorm. In our two NeurIPS 2024 papers, we investigate how diverse activation and normalization functions affect GNN performance and introduce two novel, task- and graph-adaptive layers: DiGRAF [1] and GRANOLA [2]. We present the theoretical foundations and design motivations for these layers and validate their practical benefits through extensive experiments on a broad suite of graph-learning benchmarks. Our results demonstrate that DiGRAF and GRANOLA consistently outperform conventional alternatives, highlighting the critical role of adaptive activation and normalization in advancing GNN performance.
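
For context, below is a minimal PyTorch Geometric sketch of the conventional pattern the abstract refers to: a message-passing layer followed by fixed BatchNorm normalization and a ReLU activation. The block name, layer choices, and dimensions are illustrative assumptions for this announcement, not code from the papers; the talk's DiGRAF and GRANOLA layers would replace the activation and normalization components shown here.

    import torch
    import torch.nn as nn
    from torch_geometric.nn import GCNConv

    class ConventionalGNNBlock(nn.Module):
        """A typical GNN block using the default choices the talk questions:
        a fixed ReLU activation and standard BatchNorm normalization."""

        def __init__(self, in_channels: int, out_channels: int):
            super().__init__()
            self.conv = GCNConv(in_channels, out_channels)  # message passing
            self.norm = nn.BatchNorm1d(out_channels)        # conventional normalization
            self.act = nn.ReLU()                            # conventional activation

        def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
            # Aggregate neighbor features, then normalize and apply the nonlinearity.
            return self.act(self.norm(self.conv(x, edge_index)))

    # Hypothetical usage: 4 nodes with 3 features each, 2 directed edges.
    x = torch.randn(4, 3)
    edge_index = torch.tensor([[0, 1], [1, 2]])  # shape [2, num_edges]
    block = ConventionalGNNBlock(3, 8)
    out = block(x, edge_index)  # shape [4, 8]

Unlike this fixed ReLU/BatchNorm pairing, the layers presented in the talk adapt to the task and the input graph.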

This talk is part of the Accelerate Lunchtime Seminar Series.
