Approximating Labelled Markov Processes by Averaging
If you have a question about this talk, please contact Bjarki Holm.

Markov processes with continuous state spaces, continuous time evolution, or both arise naturally in many areas of computer science: robotics, performance evaluation, and modelling and simulation, for example. For discrete systems there was a pioneering treatment of probabilistic bisimulation and its logical characterization due to Larsen and Skou [LS91]. The continuous case, however, was neglected for a time. For a little over a decade there has been significant activity among computer scientists, as it came to be realized that ideas from process algebra, such as bisimulation and the existence of a modal characterization, would be useful for the study of such systems. In [BDEP97], continuous-state Markov processes with labels to capture interactions were christened labelled Markov processes (LMPs). A vast literature on timed systems, hybrid systems, robotics and control theory also refers to such systems.

In [DGJP00, DGJP03] a theory of approximation for LMPs was initiated; it was refined and extended in [DD03, DDP03]. Finding finite approximations is vital to give a computational handle on such systems. These techniques were adapted to Markov decision processes (MDPs) and applied to find good estimates of value functions [FPP05].

The previous work was characterized by rather intricate proofs that did not seem to follow from basic ideas in any straightforward way. For example, the logical characterization of (probabilistic) bisimulation, first proved in [DEP98], requires subtle properties of analytic spaces and rather painful constructions [Eda99]. Proofs of basic results in approximation theory also seemed more difficult than they should be. In recent work we take an entirely new approach, in some ways "dual" to the normal view of probabilistic transition systems: we think of a Markov process as a transformer of functions defined on it, rather than as a transformer of the state.
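To illustrate the idea of a finite approximation, the following sketch partitions a continuous state space into blocks and averages the transition probabilities over each block. The toy Gaussian-like kernel and all names here (`kernel_density`, `averaged_kernel`) are hypothetical illustrations, not constructions from the papers cited above.

```python
import numpy as np

def kernel_density(s, s_prime):
    """Unnormalised transition density of a toy process on [0, 1]."""
    return np.exp(-(s_prime - s) ** 2)

def averaged_kernel(n, grid=400):
    """Finite n x n approximation: entry [i, j] is the probability of
    jumping from block i to block j, averaged over starting points
    sampled in block i."""
    xs = (np.arange(grid) + 0.5) / grid           # fine sample points in [0, 1]
    dens = kernel_density(xs[:, None], xs[None, :])
    dens /= dens.sum(axis=1, keepdims=True)       # normalise each row
    blocks = (xs * n).astype(int)                 # block index of each point
    P = np.zeros((n, n))
    for i in range(n):
        rows = dens[blocks == i]                  # rows starting in block i
        P[i] = [rows[:, blocks == j].sum(axis=1).mean() for j in range(n)]
    return P

P = averaged_kernel(4)
# Each row of the averaged kernel is a probability distribution,
# so the result is a genuine finite Markov chain approximating the
# continuous process.
assert np.allclose(P.sum(axis=1), 1.0)
```

Refining the partition (increasing n) yields a family of finite chains that capture the continuous process ever more closely, which is the computational handle the approximation theory provides.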
Thus, instead of working directly with a Markov kernel \tau(s, A) that takes a state s to a probability distribution over the state space, we think of a Markov process as transforming a function f into a new function \int f(s')\tau(s, ds') over the state space. This is the probabilistic analogue of working with predicate transformers, a point of view advocated by Kozen [Koz85] in a path-breaking early paper on probabilistic systems and logic. This new way of looking at things leads to three new results.
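For a finite state space the two views above can be stated concretely: the kernel becomes a stochastic matrix, distributions are pushed forward by multiplying on the left, and functions are pulled back by multiplying on the right. The matrix below is a made-up example, not taken from the talk; it is a minimal sketch of the duality.

```python
import numpy as np

# Hypothetical 3-state Markov process: tau[s, s2] is the
# probability of jumping from state s to state s2.
tau = np.array([
    [0.5, 0.5, 0.0],
    [0.1, 0.6, 0.3],
    [0.0, 0.2, 0.8],
])

# State-transformer view: a distribution mu over states is pushed
# forward to the next-step distribution mu @ tau.
mu = np.array([1.0, 0.0, 0.0])
mu_next = mu @ tau

# Function-transformer ("predicate transformer") view: a real-valued
# function f on states is pulled back to (T f)(s) = sum_s2 tau[s, s2] * f(s2),
# the discrete analogue of \int f(s') tau(s, ds').
f = np.array([0.0, 1.0, 2.0])
Tf = tau @ f

# The two views are dual: integrating f against the pushed-forward
# distribution equals integrating T f against the original one.
assert np.isclose(mu_next @ f, mu @ Tf)
```

The closing assertion is exactly the adjointness that makes the "dual" view work: nothing is lost by moving from transforming states to transforming functions.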
There are a number of interesting categorical aspects to this work which I will emphasize in the talk. This is joint work with Philippe Chaput, and with Vincent Danos and Gordon Plotkin.
This talk is part of the Logic and Semantics Seminar (Computer Laboratory) series.