A rough guide to the Aldous-Hoover representation theorem for exchangeable arrays
If you have a question about this talk, please contact Konstantina Palla.

Exchangeable arrays can be used to model networks, graphs, collaborative filtering, etc. That said, my goal will be to present the essential structure and ideas underlying Aldous’s proof of his representation theorem for exchangeable arrays. Disclaimer 1: no data or applications will be harmed in this presentation. Disclaimer 2: I wouldn’t expect anyone unfamiliar with the proof to get much out of this unless they pay close attention and ask questions frequently.

A sequence of random variables X1, X2, ..., is exchangeable if its distribution is invariant under permutations of any finite number of indices. A classic result by de Finetti says that such sequences are conditionally i.i.d.: informally, we can invent a random variable alpha such that, conditionally on alpha, the elements are independent and each has a distribution depending only on alpha. For binary sequences, one such alpha turns out to be the limiting fraction of 1’s in the sequence, and the conditional distribution of X1 given alpha is Bernoulli with parameter alpha.

In 1981, David Aldous investigated arrays X_ij (i,j = 1,2,...) of random variables whose distribution is invariant under permutations of the rows and permutations of the columns. This is an appropriate symmetry to have if the rows and columns represent, say, objects and, a priori, all objects are created equal (hence, the ordering of the rows and columns was arbitrary). In his paper “Representations for partially exchangeable arrays of random variables”, Aldous shows that such row-column exchangeable (RCE) arrays have conditionally independent entries and, in particular, can be written in the form
X_ij = f(\alpha, \xi_i, \eta_j, \lambda_ij)

for some (measurable) function f and i.i.d. uniform random variables \alpha, \xi_i, \eta_j, \lambda_ij (i,j = 1,2,...). It may be a complete mystery how such a result could be proven. I’m hoping to give some insight into this by going through the proof and answering questions to the best of my ability. The paper is: D. Aldous, “Representations for partially exchangeable arrays of random variables”, Journal of Multivariate Analysis 11 (1981).
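As a concrete (if toy) reading of this representation, one can sample an RCE array by drawing all the latent uniforms and pushing them through some f. The particular f below, where an entry is 1 with probability \xi_i * \eta_j, is an illustrative choice of mine and not something from the talk; any measurable f yields a row-column exchangeable array.

```python
import random

def sample_rce_array(n, m, f, seed=0):
    """Sample an n x m array with X_ij = f(alpha, xi_i, eta_j, lambda_ij),
    where alpha, the xi_i, the eta_j, and the lambda_ij are i.i.d. Uniform(0,1)."""
    rng = random.Random(seed)
    alpha = rng.random()                      # global mixing variable
    xi = [rng.random() for _ in range(n)]     # one latent uniform per row
    eta = [rng.random() for _ in range(m)]    # one latent uniform per column
    # lambda_ij is drawn fresh for each entry
    return [[f(alpha, xi[i], eta[j], rng.random()) for j in range(m)]
            for i in range(n)]

# Illustrative f (an assumption, not from the talk): a "graphon"-style
# binary entry that is 1 with probability xi * eta.
def f(alpha, xi, eta, lam):
    return 1 if lam < xi * eta else 0

X = sample_rce_array(5, 5, f, seed=42)
```

Permuting the rows or columns of X only relabels the i.i.d. xi's and eta's, so the distribution of the array is unchanged, which is exactly the RCE symmetry.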
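The de Finetti statement earlier is the one-dimensional analogue, and is easy to simulate: draw alpha uniformly, then draw the sequence i.i.d. Bernoulli(alpha), and check that the empirical fraction of 1's recovers alpha. A minimal sketch (the sample size and seed are arbitrary choices of mine):

```python
import random

def sample_exchangeable_binary(n, seed=0):
    """De Finetti construction: draw the mixing variable alpha ~ Uniform(0,1),
    then draw X_1, ..., X_n i.i.d. Bernoulli(alpha)."""
    rng = random.Random(seed)
    alpha = rng.random()
    xs = [1 if rng.random() < alpha else 0 for _ in range(n)]
    return alpha, xs

alpha, xs = sample_exchangeable_binary(100_000, seed=1)
frac = sum(xs) / len(xs)
# By the law of large numbers, frac is close to alpha: the limiting
# fraction of 1's identifies the mixing variable.
```

This also illustrates why the sequence is exchangeable but not independent: unconditionally, seeing many 1's early on raises the posterior on alpha and hence the probability that later entries are 1.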
See also my marginalia:
This talk is part of the Machine Learning Reading Group @ CUED series.