Towards Knowledgeable Foundation Models
If you have a question about this talk, please contact Tiancheng Hu.

Abstract: Large language models (LLMs) and vision-language models (VLMs) have demonstrated remarkable performance on knowledge reasoning tasks, owing to the implicit knowledge they acquire from extensive pretraining data. However, their internal knowledge bases often suffer from disorganization and hallucination, bias towards common entities, and rapid obsolescence. Consequently, LLMs frequently fabricate untruthful information, resist updating outdated knowledge, or struggle to generalize across multiple languages. In this talk, I will discuss several research directions that aim to make foundation models' knowledge more accurate, organized, up-to-date and fair: (1) Where and How is Knowledge Stored in LLMs? (2) How to Control an LLM's Knowledge? (3) How to Update an LLM's Dynamic Knowledge? (4) How to Bridge the Knowledge Gap between Natural Language and Unnatural Language?

Bio: Heng Ji is a professor at the Siebel School of Computing and Data Science, and an affiliated faculty member of the Electrical and Computer Engineering Department, the Coordinated Science Laboratory, and the Carl R. Woese Institute for Genomic Biology at the University of Illinois Urbana-Champaign. She is an Amazon Scholar and the Founding Director of the Amazon-Illinois Center on AI for Interactive Conversational Experiences (AICE). She received her B.A. and M.A. in Computational Linguistics from Tsinghua University, and her M.S. and Ph.D. in Computer Science from New York University. Her research interests focus on Natural Language Processing, especially Multimedia Multilingual Information Extraction, Knowledge-enhanced Large Language Models and Vision-Language Models, and AI for Science. Her awards include an Outstanding Paper Award at ACL 2024, two Outstanding Paper Awards at NAACL 2024, "Young Scientist" by the World Laureates Association in 2023 and 2024, "Young Scientist" and membership of the Global Future Council on the Future of Computing by the World Economic Forum in 2016 and 2017, "Women Leaders of Conversational AI" (Class of 2023) by Project Voice, the "AI's 10 to Watch" Award by IEEE Intelligent Systems in 2013, an NSF CAREER award in 2009, the PACLIC 2012 Best Paper runner-up, the "Best of ICDM 2013" and "Best of SDM 2013" paper awards, an ACL 2018 Best Demo Paper nomination, the ACL 2020 Best Demo Paper Award, the NAACL 2021 Best Demo Paper Award, Google Research Awards in 2009 and 2014, IBM Watson Faculty Awards in 2012 and 2014, and Bosch Research Awards in 2014-2018. She served as associate editor for IEEE/ACM Transactions on Audio, Speech, and Language Processing, and as Program Committee Co-Chair of many conferences, including NAACL-HLT 2018 and AACL-IJCNLP 2022. She was elected secretary of the North American Chapter of the Association for Computational Linguistics (NAACL) for 2020-2023.

This talk is part of the Language Technology Lab Seminars series.