Positional encodings in LLMs

If you have a question about this talk, please contact Pietro Lio.

Positional encodings are essential for transformer-based language models to understand sequence order, yet their influence extends far beyond simple position tracking. This talk explores the landscape of positional encoding methods in LLMs and reveals surprising insights about how these architectural choices shape model behavior.

We begin with the fundamental challenge: why attention mechanisms, which are otherwise permutation-invariant over their inputs, require explicit positional information. We then survey the evolution of encoding strategies, from sinusoidal approaches to modern techniques like RoPE, examining their architectural implications and trade-offs.
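As a rough illustration of the two schemes named above (not drawn from the talk itself), the following minimal NumPy sketch contrasts sinusoidal encodings, which are added to token embeddings, with RoPE-style rotations, which are applied to queries and keys; function names and shapes are chosen purely for the example, and the 10000 frequency base follows the original formulations.

# Minimal sketch of two positional-encoding schemes. Illustrative only.
import numpy as np

def sinusoidal_encoding(num_positions: int, dim: int) -> np.ndarray:
    """Return a (num_positions, dim) table of sin/cos positional encodings."""
    positions = np.arange(num_positions)[:, None]            # (P, 1)
    freqs = 1.0 / (10000 ** (np.arange(0, dim, 2) / dim))    # (dim/2,)
    angles = positions * freqs                                # (P, dim/2)
    enc = np.zeros((num_positions, dim))
    enc[:, 0::2] = np.sin(angles)   # even dimensions get sine
    enc[:, 1::2] = np.cos(angles)   # odd dimensions get cosine
    return enc

def apply_rope(x: np.ndarray) -> np.ndarray:
    """Apply a rotary position embedding to x of shape (seq_len, dim).

    Each consecutive pair of dimensions is rotated by an angle proportional
    to the token's position, so relative offsets show up as phase differences
    in the query-key dot product.
    """
    seq_len, dim = x.shape
    positions = np.arange(seq_len)[:, None]
    freqs = 1.0 / (10000 ** (np.arange(0, dim, 2) / dim))
    angles = positions * freqs                 # (seq_len, dim/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x_even, x_odd = x[:, 0::2], x[:, 1::2]
    rotated = np.empty_like(x)
    rotated[:, 0::2] = x_even * cos - x_odd * sin
    rotated[:, 1::2] = x_even * sin + x_odd * cos
    return rotated

# Usage: sinusoidal vectors are added to embeddings once at the input,
# whereas RoPE acts on queries/keys inside every attention layer.
emb = np.random.randn(8, 16)
x_with_sinusoidal = emb + sinusoidal_encoding(8, 16)
q_rotated = apply_rope(emb)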

The talk then examines how these encoding strategies shape model architectures and representations in practice: we analyze the limitations of each approach and trace how positional information propagates through transformer layers and influences the learned representations.

Watch it remotely

This talk is part of the Foundation AI series.
