
Deploying Deep Neural Networks in the Embedded Space


If you have a question about this talk, please contact Robert Mullins.

Deep Neural Networks (DNNs) have emerged as the dominant model across various AI applications. In the era of IoT and mobile systems, the efficient deployment of DNNs on embedded platforms is vital to enabling intelligent applications. In this talk I will present our recent work on the optimised mapping of DNNs to embedded settings, focusing on DNN-to-accelerator toolflows and discussing both the challenges and the opportunities of applying reconfigurable computing to the realisation of intelligent embedded systems.

Dr Christos Bouganis is Reader in Intelligent Digital Systems at Imperial College London, where he leads the Intelligent Digital Systems group (iDSL), a team of researchers investigating FPGA-based computing for machine learning, computer vision, and image processing. He serves on several TPCs, including FCCM, FPL, and FPT, and is a member of the editorial boards of IEEE Transactions on Image Processing, IET Computers & Digital Techniques, and the Journal of Systems Architecture.

This talk is part of the Hardware for Machine Learning series.



© 2006-2024, University of Cambridge.