BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Neural Sequence Models for Mathematical Reasoning - Yuhuai (Tony)
  Wu\, Stanford University & Google
DTSTART:20220614T150000Z
DTEND:20220614T160000Z
UID:TALK171794@talks.cam.ac.uk
CONTACT:Mateja Jamnik
DESCRIPTION:"Join us on Zoom":https://zoom.us/j/99166955895?pwd=SzI0M3pMVE
 kvNmw3Q0dqNDVRalZvdz09\n\nAdvanced mathematical reasoning is unique in hum
 an intelligence\, and it is a fundamental building block for many intellec
 tual pursuits and scientific developments. In this talk\, I will describe 
 generic approaches to mathematical reasoning using neural sequence models.
  Unlike previous approaches with highly constrained action spaces\, sequen
 ce models allow the prover to freely synthesize arbitrary proof steps\, in
 cluding the synthesis of conjectures\, lemmas\, and definitions. We create
 d a tool that can help formal mathematicians prove theorems in Lean\, and 
 our model made contributions accepted by professional mathematicians\, add
 ing new proofs to the Lean Mathlib library. The aforementioned work highli
 ghts the power of modern sequence models for automated theorem proving. Ho
 wever\, there are still major reasoning capabilities that are missing. Fir
 st\, for reasoning tasks such as theorem proving or program synthesis\, on
 e must be able to work with large and continuously changing theorem databa
 ses and code repositories. I introduced a new transformer architecture tha
 t can memorize the internal representations of past inputs\, allowing the 
 utilization of newly added facts or code without the need for retraining.
  Second\, reasoning is often the result of extended chains of thought\, wh
 ile current models mostly perform fast\, intuitive pattern matching. I int
 roduced a learning algorithm that enables models to generate explicit rati
 onales when tackling challenging reasoning problems\, by bootstrapping fro
 m a pre-trained large language model. I will conclude my talk with future 
 research directions towards the goal of building a strong mathematical rea
 soning model.\n\n*Bio:*\n\nYuhuai Tony Wu is a Postdoctoral Scholar at Sta
 nford University working with Percy Liang and Jay McClelland\, and a Resea
 rch Scientist at Google. His long-term research interest is to build a mac
 hine that can automate mathematics. Towards this goal\, his current resear
 ch focuses on building fundamental reasoning architectures as well as impr
 oving the reasoning capabilities of large language models. During his Ph.D
 . at the University of Toronto\, he was awarded a Google PhD Fellowship.\n\n
LOCATION:Zoom
END:VEVENT
END:VCALENDAR
