
Hypergraph Factorisation for Multi-Tissue Gene Expression Imputation


If you have a question about this talk, please contact Mateja Jamnik.

NOTE: Room changed, now Lecture Theatre 1

Integrating gene expression across tissues is crucial for understanding the coordinated biological mechanisms that drive disease and characterise homeostasis. However, traditional multi-tissue integration methods either cannot handle uncollected tissues or rely on genotype information, which is often unavailable and subject to privacy concerns. To address these challenges, we present HYFA (Hypergraph Factorisation), a parameter-efficient graph representation learning approach for joint imputation of multi-tissue and cell-type gene expression. HYFA represents multi-tissue gene expression in a hypergraph of individuals, metagenes, and tissues, and learns factorised representations via a custom message passing neural network operating on the hypergraph. HYFA supports a variable number of reference tissues, increasing the statistical power over single-tissue approaches, and incorporates inductive biases to exploit the shared regulatory architecture of tissues and genes. In performance comparisons, HYFA attains improved performance over TEEBoT and standard imputation methods across a broad range of tissues from the Genotype-Tissue Expression (GTEx) project. In post-imputation analysis, application of expression Quantitative Trait Loci (eQTL) mapping to the fully-imputed GTEx data yields a substantial increase in the number of detected replicable eQTLs.
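For intuition, below is a minimal, hypothetical sketch of one message-passing step over an (individual, metagene, tissue) hypergraph, in the spirit of the abstract's description. It is not the authors' HYFA implementation: the layer name, dimensions, mean aggregation, and GRU-based update are illustrative assumptions only.

    # Illustrative sketch, NOT the HYFA code: one message-passing step on a
    # hypergraph whose hyperedges connect (individual, metagene, tissue) triplets.
    import torch
    import torch.nn as nn

    class TripartiteHypergraphLayer(nn.Module):
        def __init__(self, dim: int):
            super().__init__()
            # Message function over the concatenated individual, metagene,
            # and tissue node states of each hyperedge (assumed form).
            self.msg = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU())
            # Gated update of the individual representation (assumed form).
            self.update_individual = nn.GRUCell(dim, dim)

        def forward(self, h_ind, h_meta, h_tis, edges):
            # edges: LongTensor of shape (E, 3) with columns
            # (individual index, metagene index, tissue index), one row per
            # observed (donor, metagene, tissue) measurement.
            i, m, t = edges[:, 0], edges[:, 1], edges[:, 2]
            msgs = self.msg(torch.cat([h_ind[i], h_meta[m], h_tis[t]], dim=-1))
            # Mean-aggregate hyperedge messages into each individual's state.
            agg = torch.zeros_like(h_ind).index_add_(0, i, msgs)
            counts = torch.zeros(h_ind.size(0), 1).index_add_(
                0, i, torch.ones(edges.size(0), 1)
            ).clamp_min(1.0)
            return self.update_individual(agg / counts, h_ind)

In such a sketch, imputation for an uncollected tissue would then read out expression from the updated individual embedding combined with the embedding of the target tissue; the reference tissues available for a donor simply determine which hyperedges exist, which is one way a variable number of reference tissues can be accommodated.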


You can also join us on Zoom

This talk is part of the Artificial Intelligence Research Group Talks (Computer Laboratory) series.

