
Work in progress: Diving deeper into building distributed representations of graphs


If you have a question about this talk, please contact Mateja Jamnik.

A fundamental prerequisite for machine learning algorithms to learn about input data is the ability to discern one observation from another. In most cases this requires explicitly transforming observations into feature vector representations that may be used as inputs to machine learning algorithms. Transforming an observation from one form of representation to another for input into a learning system is a crucial stage that incorporates various assumptions we have made about the observations and the desired behaviour of the learning system. This talk will focus on building representations of graphs, starting from an assumption we make about the data, interpreting this assumption, and formulating a system for learning distributed representations of graphs with popular neural embedding methods. From there I will dive deeper into what the neural embedding method is doing and characterise the construction of such embeddings based on the association between a graph and its induced substructures.
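As a rough illustration of the kind of substructure vocabulary such methods build on (the talk itself does not specify its construction; this sketch assumes Weisfeiler-Lehman subtree patterns, as used in graph2vec-style approaches, with the neural embedding step replaced here by simple pattern counting):

```python
from collections import Counter

def wl_substructures(adj, iterations=2):
    """Weisfeiler-Lehman relabelling: each node's label is repeatedly
    replaced by its own label joined with the sorted labels of its
    neighbours. The multiset of labels observed across all iterations
    acts as the graph's 'vocabulary' of induced substructures, which a
    neural embedding method would then associate with the graph itself.
    `adj` maps each node to a list of its neighbours."""
    labels = {v: str(len(nbrs)) for v, nbrs in adj.items()}  # initial label: degree
    vocab = Counter(labels.values())
    for _ in range(iterations):
        new_labels = {}
        for v, nbrs in adj.items():
            # Signature combines a node's label with its neighbourhood's labels.
            new_labels[v] = labels[v] + "|" + ",".join(sorted(labels[u] for u in nbrs))
        labels = new_labels
        vocab.update(labels.values())
    return vocab

# Two small graphs: a triangle and a 3-node path.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}

print(wl_substructures(triangle))
print(wl_substructures(path))
```

Isomorphic graphs yield identical substructure multisets, while structurally different graphs diverge, which is what lets an embedding model learn to place graphs near or far from one another according to shared substructures.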

This talk is part of the Artificial Intelligence Research Group Talks (Computer Laboratory) series.




© 2006-2024, University of Cambridge.