Accelerated Free-Form Model Discovery of Interpretable Models using Small Data
If you have a question about this talk, please contact INI IT.

VMVW02 - Generative models, parameter learning and sparsity

The ability to abstract the behavior of a system or phenomenon and distill it into a consistent mathematical model is instrumental for a broad range of applications. Historically, models were derived manually from first principles. The first-principles approach often yields interpretable models of remarkable universality using little data; its derivations, however, are time-consuming and rely heavily on domain expertise. Conversely, with the rising pervasiveness of data-driven approaches, the rapid derivation and deployment of models has become a reality. Scalability is gained through dependence on an exploitable structure (a fixed functional form). Such structures, in turn, yield non-interpretable models, require Big Data for training, and provide limited predictive power outside the span of the training set.

In this talk, we will introduce an accelerated model-discovery approach that attempts to bridge the two approaches, enabling the discovery of universal, interpretable models using Small Data. To accomplish this, the proposed algorithm searches for free-form symbolic models, where neither the structure nor the set of operator primitives is predetermined. The discovered models are provably globally optimal, promoting superior predictive power on unseen input. We will demonstrate the algorithm by re-discovering some fundamental laws of science and point to ongoing work on the discovery of new models for hitherto unexplainable phenomena. An illustrative toy sketch of free-form symbolic search follows the references below.

References:
Globally optimal symbolic regression, NIPS Interpretable ML Workshop, 2017, https://arxiv.org/abs/1710.10720
Globally optimal Mixed Integer Non-Linear Programming (MINLP) formulation for symbolic regression, IBM Technical Report ID 219095, 2016

This talk is part of the Isaac Newton Institute Seminar Series.
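To make the idea of "free-form" symbolic model discovery concrete, the sketch below is a minimal, purely illustrative brute-force symbolic regression over a small, bounded space of expression trees. It is not the MINLP formulation described in the references; the operator set, constants, and toy data are assumptions chosen for illustration. Within its bounded space, exhaustive enumeration is trivially globally optimal, which is the property the referenced work obtains at scale by other means.

```python
# Illustrative sketch only (NOT the referenced MINLP formulation):
# brute-force symbolic regression over a bounded space of expression trees.
import math

# Assumed operator primitives; the discussed method does not fix this set in advance.
BINARY_OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b if abs(b) > 1e-9 else float("inf"),
}
TERMINALS = ["x", 1.0, 2.0]  # one variable and two assumed constants


def enumerate_trees(depth):
    """Return all expression trees of at most the given depth."""
    if depth == 0:
        return list(TERMINALS)
    smaller = enumerate_trees(depth - 1)
    trees = list(smaller)
    for op in BINARY_OPS:
        for left in smaller:
            for right in smaller:
                trees.append((op, left, right))
    return trees


def evaluate(tree, x):
    """Evaluate an expression tree at a single input value x."""
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return BINARY_OPS[op](evaluate(left, x), evaluate(right, x))


def to_string(tree):
    """Render an expression tree as a readable formula."""
    if isinstance(tree, tuple):
        op, left, right = tree
        return f"({to_string(left)} {op} {to_string(right)})"
    return str(tree)


# "Small data": five samples from a hidden toy law, here y = x**2 + 1.
data = [(x, x * x + 1.0) for x in (-2.0, -1.0, 0.0, 1.0, 2.0)]

# Exhaustive search: trivially globally optimal within this bounded space.
best_tree, best_err = None, float("inf")
for tree in enumerate_trees(2):
    err = 0.0
    for x, y in data:
        pred = evaluate(tree, x)
        if not math.isfinite(pred):
            err = float("inf")
            break
        err += (pred - y) ** 2
    if err < best_err:
        best_tree, best_err = tree, err

print(f"best expression: {to_string(best_tree)}  (squared error {best_err:.3g})")
```

The combinatorial enumeration above is what makes naive free-form search intractable beyond toy spaces; the second reference instead casts the search as a Mixed Integer Non-Linear Program that can be solved to provable global optimality.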