Probability paradigms in Uncertainty Quantification
If you have a question about this talk, please contact INI IT.

UNQ - Uncertainty quantification for complex systems: theory and methodologies

Probability theory was axiomatically built on the concept of measure by A. Kolmogorov in the early 1930s, giving the probability measure and the related integral as primary objects, and random variables, i.e. measurable functions, as secondary. Not long after Kolmogorov's work, developments in operator algebras connected to quantum theory in the early 1940s led to similar results in an approach where algebras of random variables and the expectation functional are the primary objects. Historically, this picks up the view implicitly contained in the early probabilistic theory of the Bernoullis.

This algebraic approach allows extensions to more complicated concepts such as non-commuting random variables and infinite-dimensional function spaces, as they occur e.g. in quantum field theory, random matrices, and tensor-valued random fields. It not only fully recovers the measure-theoretic approach, but can extend it considerably. For much practical and numerical work, which is often primarily concerned with random variables, expectations, and conditioning, it offers an independent theoretical underpinning. In short, it is "probability without measure theory".

This functional-analytic setting has strong connections to the spectral theory of linear operators, where analogies to integration are apparent if they are looked for. These links extend in a twofold way to the concept of weak distribution, which describes probability on infinite-dimensional vector spaces. Here the random elements are represented by linear mappings, and factorisations of linear maps are intimately connected with representations and tensor products, as they appear in numerical approximations.
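As a minimal sketch of the algebraic viewpoint described above (the notation is assumed for illustration, not taken from the talk): an algebraic probability space is a pair $(\mathcal{A}, E)$ of a unital $*$-algebra of "random variables" and a positive, normalised linear functional on it, the expectation.

```latex
% An algebraic probability space (A, E): the expectation is a state, i.e.
\[
  E \colon \mathcal{A} \to \mathbb{C}, \qquad
  E(\mathbf{1}) = 1, \qquad
  E(a^{*}a) \ge 0 \quad \text{for all } a \in \mathcal{A}.
\]
% The classical, measure-theoretic setting is recovered from the
% commutative choice
\[
  \mathcal{A} = L^{\infty}(\Omega, \mathfrak{F}, \mathbb{P}), \qquad
  E(X) = \int_{\Omega} X \,\mathrm{d}\mathbb{P},
\]
% while non-commutative choices of A cover e.g. quantum observables
% and random matrices.
```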
This conceptual basis of vector spaces, algebras, linear functionals, and operators gives a fresh view on the concepts of expectation and conditioning, as they occur in applications of Bayes's theorem. The problem of Bayesian updating will be sketched in the context of algebras via projections and mappings.

This talk is part of the Isaac Newton Institute Seminar Series.
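The "conditioning via projections" mentioned above can be illustrated in the classical $L^{2}$ case (a standard fact, stated here as background rather than as the talk's own formulation): for a sub-$\sigma$-algebra $\mathfrak{G} \subseteq \mathfrak{F}$, the conditional expectation is the orthogonal projection onto the subspace of $\mathfrak{G}$-measurable square-integrable variables.

```latex
% Conditional expectation E[X | G] as orthogonal projection of
% X in L^2(Omega, F, P) onto the closed subspace L^2(Omega, G, P):
\[
  E\!\left[(X - E[X \mid \mathfrak{G}])\, Y\right] = 0
  \quad \text{for all } Y \in L^{2}(\Omega, \mathfrak{G}, \mathbb{P}),
\]
% equivalently, the best G-measurable mean-square approximation of X:
\[
  E[X \mid \mathfrak{G}]
  = \operatorname*{arg\,min}_{Y \in L^{2}(\Omega, \mathfrak{G}, \mathbb{P})}
    E\!\left[(X - Y)^{2}\right].
\]
```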