Positional encodings in LLMs
If you have a question about this talk, please contact Pietro Lio.

Positional encodings are essential for transformer-based language models to understand sequence order, yet their influence extends far beyond simple position tracking. This talk explores the landscape of positional encoding methods in LLMs and reveals surprising insights about how these architectural choices shape model behavior.

We begin with the fundamental challenge: why attention mechanisms require explicit positional information. We then survey the evolution of encoding strategies, from sinusoidal approaches to modern techniques like RoPE, examining their architectural implications and trade-offs. The talk delves into how these different encoding strategies shape model architectures and representations, analyzing the specific limitations of each approach and examining how positional information propagates through transformer layers and influences the learned representations.

This talk is part of the Foundation AI series.
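As a rough illustration of the two encoding families named in the abstract, the NumPy sketch below builds the fixed sinusoidal position matrix in the style of Vaswani et al. (2017) and applies a RoPE-style rotation to a query/key matrix. The function names, shapes, and base constant 10000 are illustrative assumptions, not material from the talk.

```python
# Minimal sketch (not from the talk): two ways transformers inject position.
# Sinusoidal encodings are added to token embeddings; RoPE instead rotates
# each adjacent pair of query/key dimensions by a position-dependent angle.
import numpy as np

def sinusoidal_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Return the (seq_len, d_model) matrix of fixed sinusoidal position encodings (d_model even)."""
    positions = np.arange(seq_len)[:, None]                      # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]                     # (1, d_model/2)
    angles = positions / (10000 ** (dims / d_model))             # (seq_len, d_model/2)
    enc = np.zeros((seq_len, d_model))
    enc[:, 0::2] = np.sin(angles)                                # even dimensions: sine
    enc[:, 1::2] = np.cos(angles)                                # odd dimensions: cosine
    return enc

def rope_rotate(x: np.ndarray) -> np.ndarray:
    """Apply a RoPE-style rotation to queries/keys x of shape (seq_len, d_head), pairing adjacent dims."""
    seq_len, d_head = x.shape
    positions = np.arange(seq_len)[:, None]                      # (seq_len, 1)
    freqs = 1.0 / (10000 ** (np.arange(0, d_head, 2) / d_head))  # (d_head/2,)
    theta = positions * freqs                                    # (seq_len, d_head/2)
    cos, sin = np.cos(theta), np.sin(theta)
    x_even, x_odd = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x_even * cos - x_odd * sin                    # 2D rotation of each pair
    out[:, 1::2] = x_even * sin + x_odd * cos
    return out

if __name__ == "__main__":
    q = np.random.randn(8, 64)   # 8 positions, head dimension 64
    print(sinusoidal_encoding(8, 64).shape, rope_rotate(q).shape)
```

The key contrast the abstract points to: the sinusoidal scheme encodes absolute position once at the input, while the RoPE rotation acts inside attention so that query-key dot products depend only on the relative offset between positions.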