Pizza & AI June 2019
If you have a question about this talk, please contact Microsoft Research Cambridge Talks Admins.

Please note, this event may be recorded. Microsoft will own the copyright of any recording and reserves the right to distribute it as required.

Speaker 1 – Andrew Fitzgibbon

Title – Big data, small data, oddly-shaped data: Welcome to “All Data” AI

Abstract – I’m happy with the term “AI”: it just means doing cool stuff with data. We’ve seen great successes with computer vision, natural language processing, and a host of other applications. However, I’m not so happy when we shoehorn every problem into a B×W×H×C block of numbers to fit the constraints of GPU hardware. As the new head of the All Data AI (ADA) group at Microsoft Cambridge, I’m excited by a future where we can apply AI in traditional “big data” scenarios; in “small data” scenarios where we need to learn fast from limited examples; and in the crossover area where we may have millions of related subproblems, each data-poor but jointly data-rich. I’m excited to apply AI to structured data such as graphs, molecules, and program code. And I’ll talk about the compounding of excitement that results from applying these techniques to shipping products that impact millions of real users.

Speaker 2 – John Bronskill

Title – Fast and Flexible Multi-Task Classification Using Conditional Neural Adaptive Processes

Abstract – This talk will describe our recent work on designing image classification systems that, after an initial multi-task training phase, can automatically adapt to new tasks encountered at test time. I will introduce an approach that relates to existing approaches to meta-learning and so-called conditional neural processes, generalising them to the multi-task classification setting. The resulting approach, called Conditional Neural Adaptive Processes (CNAPS), comprises a classifier whose parameters are modulated by an adaptation network that takes the current task’s dataset as input.
I will show that CNAPS achieves state-of-the-art results on the challenging Meta-Dataset few-shot learning benchmark, indicating robust, high-quality transfer learning that avoids both over-fitting in low-shot regimes and under-fitting in high-shot regimes. Timing experiments reveal that CNAPS is computationally efficient at test time because it does not involve gradient-based adaptation. Finally, I will show that trained models are immediately deployable to continual learning and active learning, where they can outperform existing approaches that do not leverage transfer learning.

This talk is part of the AI+Pizza series.
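The core architectural idea in the abstract (a classifier whose parameters are modulated by an adaptation network conditioned on the current task’s dataset) can be sketched roughly as follows. This is an illustrative toy in NumPy, not the CNAPS implementation; the scale-and-shift (FiLM-style) modulation and all function names here are assumptions for the sake of the sketch:

```python
import numpy as np

def adaptation_network(support_set):
    # Hypothetical adaptation network: pool the task's support set
    # into a single task embedding, then map it to per-dimension
    # scale (gamma) and shift (beta) modulation parameters.
    task_embedding = support_set.mean(axis=0)
    gamma = 1.0 + 0.1 * np.tanh(task_embedding)  # task-specific scale
    beta = 0.1 * np.tanh(task_embedding)         # task-specific shift
    return gamma, beta

def adapted_classifier(x, gamma, beta, weights):
    # Classifier whose features are modulated by the task-specific
    # gamma/beta before a linear read-out layer.
    features = np.maximum(gamma * x + beta, 0.0)  # modulated ReLU features
    return features @ weights                     # class logits

# Usage: adapt to a new task from its support examples alone --
# no gradient-based fine-tuning, just a forward pass.
rng = np.random.default_rng(0)
support = rng.standard_normal((5, 8))    # 5 support examples, 8 features
gamma, beta = adaptation_network(support)
weights = rng.standard_normal((8, 3))    # read-out for a 3-way task
logits = adapted_classifier(rng.standard_normal((2, 8)), gamma, beta, weights)
print(logits.shape)  # (2, 3)
```

The point this illustrates is why test-time adaptation is cheap: producing `gamma` and `beta` is a single forward pass through the adaptation network, with no per-task optimisation loop.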