Multi-scale cross-attention transformer encoder for event classification

If you have a question about this talk, please contact Benjamin Christopher Allanach.

We deploy an advanced Machine Learning (ML) environment, leveraging a multi-scale cross-attention encoder for event classification, taking the gg→H→hh→bbbb process at the High Luminosity Large Hadron Collider (HL-LHC) as an example. In the boosted Higgs regime, the final state consists of two fat jets. Our multi-modal network extracts information from the jet substructure and from the kinematics of the final-state particles through self-attention transformer layers. The learned representations are then integrated by an additional transformer encoder with cross-attention heads, improving classification performance. We demonstrate that our approach outperforms current alternative ML methods, whether based solely on kinematic analysis or on a combination of kinematics with mainstream ML approaches. We then evaluate the network's results with several interpretability methods, including attention-map analysis and visualisation of Gradient-weighted Class Activation Mapping (Grad-CAM). The proposed network is generic and can be applied to analyse any process carrying information at different scales.
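The fusion scheme described in the abstract — self-attention within each modality, then cross-attention to integrate them — can be sketched as follows. This is a minimal single-head NumPy illustration under assumed toy dimensions, not the speakers' implementation; all names and shapes are hypothetical, and learned projection matrices are omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ v

# Toy embeddings: one token per object in each modality (hypothetical sizes)
rng = np.random.default_rng(0)
d_model = 8
substructure = rng.normal(size=(6, d_model))  # e.g. fat-jet constituents
kinematics = rng.normal(size=(4, d_model))    # e.g. final-state four-momentum features

# Self-attention within each modality (single head, no learned projections)
sub_enc = attention(substructure, substructure, substructure)
kin_enc = attention(kinematics, kinematics, kinematics)

# Cross-attention: kinematic tokens query the substructure encoding,
# producing one fused representation per kinematic token
fused = attention(kin_enc, sub_enc, sub_enc)
print(fused.shape)  # (4, 8)
```

In a full transformer encoder, each `attention` call would additionally involve learned Q/K/V projections, multiple heads, residual connections, and layer normalisation; the cross-attention step above only shows where the two scales of information are combined.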

This talk is part of the HEP phenomenology joint Cavendish-DAMTP seminar series.


© 2006-2024 Talks.cam, University of Cambridge.