Structural Learning of Dynamic Bayesian Networks
If you have a question about this talk, please contact Konstantina Palla.
Dynamic Bayesian Networks allow complex systems of multiple random variables to be modelled as they vary over time.
The conditional dependencies among the variables within a time slice, and between two adjacent slices, are encoded in the structures
of the prior and transition networks, respectively. This talk introduces the problem of learning these structures automatically from
data. The problem is made particularly difficult by the possible presence of missing data and hidden variables. A technique by Boyen,
Friedman, and Koller for learning structures with hidden variables is presented. The essential observation is that omitting
hidden variables from a Dynamic Bayesian Network can lead to violations of the Markov assumption.
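To make the structural decomposition above concrete, here is a minimal sketch (plain Python, with illustrative variable names such as "rain" and "sprinkler" that do not come from the talk) of a DBN structure represented as two edge sets: a prior network of within-slice dependencies and a transition network of dependencies from slice t-1 to slice t. This is only an expository toy representation, not the learning technique presented in the talk.

```python
from dataclasses import dataclass, field

@dataclass
class DBNStructure:
    """Toy DBN structure: a prior network of within-slice dependencies
    and a transition network of dependencies between adjacent slices."""
    variables: list
    prior_edges: set = field(default_factory=set)       # within-slice edges
    transition_edges: set = field(default_factory=set)  # slice t-1 -> slice t edges

    def add_prior_edge(self, parent, child):
        # Conditional dependency between two variables in the same slice.
        self.prior_edges.add((parent, child))

    def add_transition_edge(self, parent_prev, child_curr):
        # Conditional dependency from a variable at time t-1 to one at time t.
        self.transition_edges.add((parent_prev, child_curr))

    def parents(self, variable):
        # Parents of `variable` at time t, assuming the within-slice structure
        # repeats in every slice (a common simplifying assumption).
        within = {p for (p, c) in self.prior_edges if c == variable}
        across = {p for (p, c) in self.transition_edges if c == variable}
        return within, across


# Illustrative example; the variable names are made up for this sketch.
dbn = DBNStructure(variables=["rain", "sprinkler", "wet_lawn"])
dbn.add_prior_edge("rain", "wet_lawn")
dbn.add_prior_edge("sprinkler", "wet_lawn")
dbn.add_transition_edge("rain", "rain")          # rain at t-1 influences rain at t
dbn.add_transition_edge("wet_lawn", "wet_lawn")  # wetness persists across slices

# Within-slice parents of wet_lawn: rain, sprinkler; cross-slice parent: wet_lawn
print(dbn.parents("wet_lawn"))
```

Structure learning then amounts to searching over these two edge sets given data; hidden variables complicate matters because, as noted above, marginalising them out can break the Markov assumption that this two-slice representation relies on.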
This talk is part of the Machine Learning Reading Group @ CUED series.