
The Past, Present and Future of Tokenization


If you have a question about this talk, please contact Suchir Salhan.

Abstract:

Current large language models (LLMs) predominantly use subword tokenization: they see text as chunks (called "tokens") made up of whole words or parts of words. This has a number of consequences. For example, LLMs often struggle with seemingly simple tasks that require character-level knowledge, such as counting the letters in a word or comparing two numbers. Subword tokenization can also lead to discrepancies across languages: processing English text with an LLM is often cheaper than processing text in other languages, because English is typically split into fewer tokens. We will talk about how these issues came to be, as well as how tokenization could be improved by moving away from subwords (e.g., to models that directly ingest bytes) and/or towards more adaptive, modular tokenization. Finally, we will conclude by discussing the far reach of tokenization into seemingly unrelated areas, such as model merging and multimodality.
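For readers unfamiliar with subword tokenization, the minimal sketch below (an illustration added for context, not part of the talk) uses the Hugging Face transformers library with the publicly available GPT-2 tokenizer to show how a single word is split into subword tokens and how the same sentence can produce different token counts across languages; the library, the "gpt2" tokenizer, and the example strings are assumptions chosen purely for illustration.

    # Minimal sketch of subword tokenization (assumes the Hugging Face
    # `transformers` package and the GPT-2 tokenizer; example strings are
    # illustrative only).
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")

    # A single word may be split into several subword tokens, which is one
    # reason character-level tasks (e.g. counting letters) are hard for LLMs.
    print(tokenizer.tokenize("tokenization"))   # e.g. ['token', 'ization']

    # The same sentence in different languages can yield different token
    # counts, so non-English text is often more expensive to process.
    english = "How many letters are in this word?"
    german = "Wie viele Buchstaben hat dieses Wort?"
    print(len(tokenizer(english)["input_ids"]))
    print(len(tokenizer(german)["input_ids"]))

The exact splits and counts depend on the tokenizer's training data and vocabulary; tokenizers trained mostly on English text tend to split other languages into more tokens.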

Speaker Biography: Benjamin Minixhofer is a PhD student in the Language Technology Lab, interested in multilinguality, tokenization and language emergence.

This talk is part of the NLIP Seminar Series.
