Weighted Finite-state Automata
If you have a question about this talk, please contact David Duvenaud.
A weighted finite-state automaton assigns weights to strings of discrete symbols. The formalism is a useful way of defining a probability distribution over strings of potentially unbounded length. By defining the automaton over two strings at the same time (a finite-state transducer), probabilistic relations between two sequences, such as conditional distributions, can be defined. I will show how inference in this type of model works and how its parameters can be learnt. Another use of the formalism, which I hope to discuss, is defining kernels between two strings.
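To make the first idea concrete, here is a minimal sketch (not from the talk or the cited papers; the class and parameter names are illustrative) of how a weighted automaton assigns a weight to a string: sum over all accepting paths of the product of transition weights, computed with the forward algorithm under a probability semiring.

```python
# Minimal sketch of a weighted finite-state automaton (WFSA), assuming
# real-valued probability (sum-product) weights. All names are illustrative.
from collections import defaultdict

class WFSA:
    def __init__(self, start, transitions, final_weights):
        # transitions: (state, symbol) -> list of (next_state, weight)
        # final_weights: state -> weight of stopping in that state
        self.start = start
        self.transitions = transitions
        self.final_weights = final_weights

    def weight(self, string):
        """Total weight of `string`: sum over all accepting paths of the
        product of transition weights (the forward algorithm)."""
        forward = defaultdict(float)
        forward[self.start] = 1.0
        for symbol in string:
            new_forward = defaultdict(float)
            for state, w in forward.items():
                for next_state, tw in self.transitions.get((state, symbol), []):
                    new_forward[next_state] += w * tw
            forward = new_forward
        return sum(w * self.final_weights.get(state, 0.0)
                   for state, w in forward.items())

# Toy automaton over {a, b}: state 0 loops on 'a' with weight 0.5 and
# moves to state 1 on 'b' with weight 0.3; state 1 is final with weight 1.0.
wfsa = WFSA(start=0,
            transitions={(0, 'a'): [(0, 0.5)],
                         (0, 'b'): [(1, 0.3)]},
            final_weights={1: 1.0})
print(wfsa.weight("aab"))  # 0.5 * 0.5 * 0.3 = 0.075
```

Defining the automaton over pairs of symbols instead of single symbols turns this into a transducer, which is the setting used for conditional distributions and for the rational kernels in the references below.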
Background material (I will not expect anyone to have read these):
Mehryar Mohri.
Finite-State Transducers in Language and Speech Processing.
Computational Linguistics, 23:2, 1997.
http://www.cs.nyu.edu/mohri/postscript/cl1.ps
Jason Eisner.
Parameter estimation for probabilistic finite-state transducers.
Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, 2002.
http://www.cs.jhu.edu/jason/papers/eisner.acl02-fst.pdf
Corinna Cortes, Patrick Haffner, Mehryar Mohri.
Rational Kernels: Theory and Algorithms.
Journal of Machine Learning Research (JMLR), vol. 5, 2004.
http://www.cs.nyu.edu/~mohri/postscript/jmlr.pdf
This talk is part of the Machine Learning Reading Group @ CUED series.