
SpeechBrain: Unifying Speech Technologies and Deep Learning With an Open Source Toolkit


If you have a question about this talk, please contact Dr Jie Pu.

This talk will be on Zoom.

Abstract: SpeechBrain is a novel open-source speech toolkit natively designed to support a wide range of speech and audio processing applications. It currently supports a large variety of tasks, including speech recognition, speech translation, speaker recognition, speech enhancement, speech separation, spoken language understanding, and multi-microphone signal processing, to name just a few. In this presentation, we will walk through the design choices, core details, and simple examples that make SpeechBrain the best speech toolkit out there for open, accessible, replicable, and transparent speech technologies.

Bio: Titouan Parcollet is an associate professor in computer science at the Laboratoire Informatique d’Avignon (LIA), Avignon University (France), and a visiting scholar at the Cambridge Machine Learning Systems Lab at the University of Cambridge (UK). Previously, he was a senior research associate in the Oxford Machine Learning Systems group at the University of Oxford (UK). He received his PhD in computer science from the University of Avignon (France), in partnership with Orkis, focusing on quaternion neural networks, automatic speech recognition, and representation learning. His current work involves efficient speech recognition, federated learning, and self-supervised learning. He also collaborates with the Mila-Quebec AI Institute as a co-leader of the SpeechBrain project.

This talk is part of the CUED Speech Group Seminars series.



© 2006-2022 Talks.cam, University of Cambridge.