Inductive Logic Programming
If you have a question about this talk, please contact Zoubin Ghahramani.

Machine Learning Tutorial Lecture

Inductive Logic Programming (ILP) is the area of Computer Science which deals with the induction of hypothesised predicate definitions from examples and background knowledge. Logic programs are used as a single representation for examples, background knowledge and hypotheses (a minimal worked example is sketched at the end of this page). ILP is differentiated from most other forms of Machine Learning (ML) both by its use of an expressive representation language and by its ability to make use of logically encoded background knowledge. This has allowed successful applications of ILP in areas such as systems biology, computational chemistry and natural language processing.

The problem of learning a set of logical clauses from examples and background knowledge has been studied since Reynolds' and Plotkin's work in the late 1960s, and ILP has been studied intensively as a research area since the early 1990s.

This talk will provide an overview of results for learning logic programs within the paradigms of learning-in-the-limit, PAC-learning and Bayesian learning. These results will be related to the various settings, implementations and applications used in ILP.

It will be argued that the Bayesian setting has a number of distinct advantages. Bayesian average-case results are easier to compare with empirical machine learning performance than results from either PAC-learning or learning-in-the-limit. Broad classes of logic programs are learnable in polynomial time in a Bayesian setting, while the corresponding PAC results are largely negative. Bayesian analysis can be used to derive and analyse algorithms for learning from positive-only examples for classes of logic programs which are unlearnable within both the PAC and learning-in-the-limit frameworks (a schematic formulation is sketched below). It will be shown how a Bayesian approach can be used to analyse the relevance of background knowledge when learning. General results will also be discussed for the expected error given a k-bit bounded incompatibility between the teacher's target distribution and the learner's prior.

This talk is part of the Machine Learning @ CUED series.
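As a concrete illustration of the ILP setting described above, consider the classic family-relations example, written as a logic program in Prolog-style clauses (the particular predicates and constants here are illustrative choices, not taken from the talk):

    % Background knowledge B
    parent(ann, mary).
    parent(mary, tom).
    parent(tom, eve).

    % Positive examples E+
    grandparent(ann, tom).
    grandparent(mary, eve).

    % An induced hypothesis H, chosen so that B together with H entails E+
    grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

Given B and E+, an ILP system searches a space of candidate clauses for a hypothesis H such that B together with H logically entails E+ (and, where negative examples are supplied, does not entail them).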
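The Bayesian setting referred to above can be sketched schematically. The formulation below is a generic one, assuming a prior p(h) over a hypothesis space H and independently drawn examples E = {e_1, ..., e_m}; the generality measure g(h) and the reading of "k-bit bounded incompatibility" are assumptions made here for illustration, not definitions taken from the talk.

    % Generic Bayesian learning formulation (schematic).
    \[
      p(h \mid E) \;\propto\; p(E \mid h)\, p(h),
      \qquad
      h_{\mathrm{MAP}} \;=\; \operatorname*{arg\,max}_{h \in \mathcal{H}} \, p(E \mid h)\, p(h).
    \]
    % Learning from positive-only examples: if each of the m examples is
    % drawn uniformly from the g(h) ground atoms covered by h, then
    \[
      p(E \mid h) \;=\; \prod_{i=1}^{m} p(e_i \mid h) \;=\; g(h)^{-m},
    \]
    % so the likelihood penalises over-general hypotheses while the prior
    % penalises over-specific ones, giving a trade-off even in the absence
    % of negative examples.
    % One plausible reading (an assumption, not necessarily the talk's
    % definition) of a k-bit bounded incompatibility between the teacher's
    % target distribution D and the learner's prior p:
    \[
      D(h) \;\le\; 2^{k}\, p(h) \quad \text{for all } h \in \mathcal{H}.
    \]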