

Reducing Speaker and Temporal Redundancy in Discrete Speech Tokenization


If you have a question about this talk, please contact Brian Sun.

Discrete speech tokens have emerged as a fundamental representation for various downstream speech processing tasks, particularly speech generation. However, most existing tokens encode dense, fixed-rate acoustic information, which introduces substantial redundancy and limits their efficiency. In this talk, I will first give a brief review of the taxonomy of current discrete speech tokens, then present our work on reducing this redundancy in two critical directions: (1) speaker timbre disentanglement, introducing a low-bitrate, single-codebook, speaker-decoupled speech codec; and (2) variable-rate temporal compression, exploring methods that dynamically adjust the frame rate of discrete tokens for greater compactness and a better bitrate-performance tradeoff. Together, these efforts highlight pathways toward more efficient and controllable discrete speech representations, paving the way for the next generation of speech technologies.

This talk is part of the CUED Speech Group Seminars series.

