Work in progress: Making efficient use of language models for theorem proving
If you have a question about this talk, please contact Mateja Jamnik.

Language models are neural networks trained on broad data and adaptable to a wide range of downstream tasks. They have shown impressive capabilities in domains such as NLP, vision, robotics, and reasoning. Language models have achieved several state-of-the-art results in mathematical reasoning, but their huge computational cost prevents wider adoption in the formal mathematics community. We introduce two methods to make efficient use of language models for theorem proving: one augments the language model's prompt with previous proof steps, and the other samples output candidates of greater quantity and diversity. Our model improves the state-of-the-art theorem-proving success rate on the Archive of Formal Proofs from 33.2% to 39.6% without incurring any further computational cost.

This talk is part of the Artificial Intelligence Research Group Talks (Computer Laboratory) series.
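The two methods can be sketched in miniature as follows. This is a toy illustration only, not the speakers' implementation: the function names, the proof-step "vocabulary", and the scores are all invented for the example. Prompt augmentation simply prepends earlier proof steps to the current goal, and diverse candidate generation draws many samples from a temperature-flattened distribution over next steps.

```python
import math
import random

def build_prompt(previous_steps, goal):
    """Prepend earlier proof steps to the current goal (illustrative).
    Conditioning on proof context costs nothing extra at inference time."""
    return "\n".join(previous_steps + [f"Goal: {goal}"])

def sample_candidates(logits, vocab, n=16, temperature=1.5, seed=0):
    """Draw n candidate proof steps from a temperature-scaled softmax.
    A higher temperature flattens the distribution, so repeated draws
    give a more diverse candidate pool; n controls the quantity."""
    rng = random.Random(seed)
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    # Unnormalised softmax weights; random.choices does not require
    # weights to sum to 1.
    weights = [math.exp(s - m) for s in scaled]
    return [rng.choices(vocab, weights=weights, k=1)[0] for _ in range(n)]

# Toy stand-ins for a real model's next-step candidates and scores.
vocab = ["apply auto", "apply simp", "apply blast", "apply metis"]
logits = [2.0, 1.5, 0.5, 0.1]

prompt = build_prompt(["lemma add_zero: ...", "apply (induct n)"], "0 + n = n")
candidates = sample_candidates(logits, vocab)
# Each distinct candidate can then be checked against the proof assistant.
```

In a real pipeline the logits would come from the language model itself, and each sampled step would be validated by the proof assistant, so a larger and more varied pool raises the chance that at least one candidate closes the goal.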