
Inductive Logic Programming


If you have a question about this talk, please contact Zoubin Ghahramani.

Machine Learning Tutorial Lecture

Inductive Logic Programming (ILP) is the area of Computer Science that deals with the induction of hypothesised predicate definitions from examples and background knowledge. Logic programs are used as a single representation for examples, background knowledge and hypotheses. ILP is differentiated from most other forms of Machine Learning (ML) both by its use of an expressive representation language and by its ability to make use of logically encoded background knowledge. This has allowed successful applications of ILP in areas such as Systems Biology, computational chemistry and Natural Language Processing.
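The shared relational representation described above can be illustrated with a minimal sketch (the predicate names and facts below are invented for illustration, not taken from the talk): ground background facts, a positive example, and a candidate clause all live in the same first-order vocabulary.

```python
# Background knowledge as a set of ground facts (invented toy family domain).
background = {("parent", "ann", "mary"), ("parent", "ann", "tom"),
              ("female", "mary")}

# A positive example the learner should explain.
positive_example = ("daughter", "mary", "ann")

# Candidate hypothesis, written as the clause  daughter(X, Y) :- female(X), parent(Y, X).
def covers(example):
    """Check whether the clause above, together with the background
    facts, entails a ground 'daughter' example."""
    _, x, y = example
    return ("female", x) in background and ("parent", y, x) in background
```

Here `covers(positive_example)` succeeds, so the clause, combined with the background knowledge, explains the example; an ILP system searches a space of such clauses.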

The problem of learning a set of logical clauses from examples and background knowledge has been studied since Reynolds' and Plotkin's work in the late 1960s. The research area of ILP has been studied intensively since the early 1990s. This talk will provide an overview of results for learning logic programs within the paradigms of learning-in-the-limit, PAC-learning and Bayesian learning. These results will be related to various settings, implementations and applications used in ILP.
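Plotkin's contribution mentioned above centred on least general generalization (LGG): the most specific term that subsumes two given terms. A minimal sketch (term encoding as nested tuples is my assumption, not from the talk):

```python
def lgg(t1, t2, subst=None):
    """Least general generalization of two first-order terms, in the
    style of Plotkin. Terms are nested tuples (functor, arg1, ...) or
    constants (plain strings); introduced variables are 'X0', 'X1', ..."""
    if subst is None:
        subst = {}          # maps differing subterm pairs to variables
    if t1 == t2:
        return t1
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        # Same functor and arity: generalize argument-wise.
        return (t1[0],) + tuple(lgg(a, b, subst) for a, b in zip(t1[1:], t2[1:]))
    # Differing subterms: the same pair always maps to the same variable.
    key = (t1, t2)
    if key not in subst:
        subst[key] = f"X{len(subst)}"
    return subst[key]
```

For example, `lgg(('daughter', 'mary', 'ann'), ('daughter', 'eve', 'tom'))` yields `('daughter', 'X0', 'X1')`, the generalized atom daughter(X0, X1); the shared substitution table ensures that repeated constants generalize to the same variable.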

It will be argued that the Bayesian setting has a number of distinct advantages. Bayesian average-case results are easier to compare with empirical machine learning performance than results from either PAC-learning or learning-in-the-limit. Broad classes of logic programs are learnable in polynomial time in a Bayesian setting, while the corresponding PAC results are largely negative. Bayesian analysis can be used to derive and analyse algorithms for learning from positive-only examples for classes of logic programs that are unlearnable within both the PAC and learning-in-the-limit frameworks. It will be shown how a Bayesian approach can be used to analyse the relevance of background knowledge when learning. General results will also be discussed for the expected error given a k-bit bounded incompatibility between the teacher's target distribution and the learner's prior.
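The intuition behind Bayesian learning from positive-only examples can be sketched as follows (the scoring function, toy domain and all names are my assumptions, not details from the talk): the prior penalises long hypotheses, while the likelihood penalises over-general ones, since a hypothesis covering g instances assigns each positive example probability 1/g.

```python
import math

def log_posterior(size_bits, covers, examples, domain_size):
    """Hedged sketch of a Bayesian score for positive-only learning:
    log P(h|E) = log P(h) + log P(E|h) up to a constant, where the prior
    charges size_bits bits and each covered example contributes log(1/g)."""
    if any(not covers(e) for e in examples):
        return float("-inf")      # must cover every positive example
    # Generality g: number of instances in the domain the hypothesis covers.
    g = sum(1 for x in range(domain_size) if covers(x))
    return -size_bits * math.log(2) + len(examples) * math.log(1.0 / g)
```

With equal hypothesis sizes, a hypothesis covering only the even numbers scores higher on positive examples {2, 4} than one covering everything, because the likelihood term rewards specificity; this trade-off is what makes learning possible without negative examples.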

This talk is part of the Machine Learning @ CUED series.



© 2006-2024, University of Cambridge.