CuAI x MLinPL: DeepMind
If you have a question about this talk, please contact .

We are excited to invite you to our first talk of Michaelmas, organised alongside the ML in PL Association! It should be a fantastic event: we are hosting Mateusz Malinowski and Petar Veličković, both senior researchers at Google DeepMind, in the Cambridge Union's Debating Chamber. The event begins at 1pm on Saturday 30th October. If you are interested, please fill out the registration form (https://cambridge2021.paperform.co/) and see the Facebook event (https://www.facebook.com/events/3071717486404137).

AGENDA
1pm – Opening remarks and talk from Petar Veličković
2pm – Talk from Mateusz Malinowski
2:45pm – Break
3pm – Discussion panel
4pm – Closing remarks

SPEAKER BIOS
Mateusz Malinowski is a Research Scientist at DeepMind. His work concerns computer vision, natural language understanding, reasoning, and scalable training. His main contributions are foundational methods for answering questions about images and a scalable alternative to backpropagation-based training. Mateusz received his PhD from the Max Planck Institute for Informatics and has won multiple awards for his contributions to computer vision.

Petar Veličković is a Staff Research Scientist at DeepMind and an Affiliated Lecturer at the University of Cambridge. He holds a PhD in Computer Science from the University of Cambridge (Trinity College), obtained under the supervision of Pietro Liò. His research concerns geometric deep learning: devising neural network architectures that respect the invariances and symmetries in data (a topic he has co-written a proto-book about). Within this area, Petar focuses on graph representation learning and its applications in algorithmic reasoning and computational biology. He has published research in these areas at machine learning venues (NeurIPS, ICLR, ICML-W) and in biomedical venues and journals (Bioinformatics, PLOS One, JCB, PervasiveHealth).
In particular, he is the first author of Graph Attention Networks (a popular convolutional layer for graphs) and Deep Graph Infomax (a scalable local/global unsupervised learning pipeline for graphs, featured in ZDNet). His research has also been used to substantially improve travel-time predictions in Google Maps (covered by outlets including CNBC, Engadget, VentureBeat, CNET, The Verge, and ZDNet).

This talk is part of the shmh4's list series.