LLM Generalization in Social Context

If you have a question about this talk, please contact Tiancheng Hu.

Abstract:

The successes of large language models (LLMs) have transformed many domains, yet LLMs do not always generalize well across contexts, particularly where social factors are involved. This talk examines LLM generalization in social contexts from three perspectives: assessment, adaptation, and application. We first present a dynamic evaluation protocol, based on directed acyclic graphs of varying complexity, for assessing LLMs on many types of reasoning tasks. We then explore how to adapt LLMs to be more socially generalizable by building culturally aware language technologies on top of a knowledge base driven by online communities. Lastly, we discuss how to customize LLMs for social skill training across a variety of social contexts. Overall, we hope to provide insights into how LLMs generalize in social contexts and how to develop socially intelligent LLMs.

Bio:

Diyi Yang is an assistant professor in the Computer Science Department at Stanford University. Her research focuses on human-centered natural language processing and computational social science. She is a recipient of a Microsoft Research Faculty Fellowship (2021), an NSF CAREER Award (2022), an ONR Young Investigator Award (2023), and a Sloan Research Fellowship (2024). Her work has received multiple paper awards or nominations at top NLP and HCI conferences.

This talk is part of the Language Technology Lab Seminars series.
