Structure in tensor-variate data: a trivial byproduct of simpler phenomena?
If you have a question about this talk, please contact Dr R.E. Turner.

As large tensor-variate data become increasingly common across machine learning and statistics, complex analysis methods for these data are similarly increasing in prevalence. This trend offers the opportunity to understand subtler and more meaningful features of the data that, ostensibly, could not be studied with simpler datasets or simpler methodologies. While promising, these advances are also perilous: novel analysis techniques do not always consider the possibility that their results are in fact an expected consequence of some simpler, already-known feature of simpler data. For example, suppose one fits a time series model (e.g. a Kalman filter or multivariate GARCH) to data indexed by time, measurement dimension, and experimental sample. Was a particular model fit achieved simply because the data were temporally smooth, and/or had correlated dimensions (or samples)?

I will present two works that address this growing problem. The first uses Kronecker algebra to derive a tensor-variate maximum entropy distribution that has user-specified moments along each mode (a minimal illustrative sketch appears below). This distribution forms the basis of a statistical hypothesis test, and I will use this test to address two active debates in the neuroscience community over the triviality of certain observed structure in data. In the second part, I will discuss how to extend this maximum entropy formulation to arbitrary constraints using deep neural network architectures in the flavor of implicit generative modeling (see the second sketch below), and I will use this method in a texture synthesis application.

John P. Cunningham is an associate professor in the Department of Statistics at Columbia University. He received a B.A. in computer science from Dartmouth College and an M.S. and Ph.D. in electrical engineering from Stanford University, and he completed postdoctoral work in the Machine Learning Group at the University of Cambridge. His research group at Columbia investigates several areas of machine learning and statistical neuroscience. http://stat.columbia.edu/~cunningham/

This talk is part of the Machine Learning @ CUED series.
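As a rough illustration of the first idea, here is a minimal sketch of sampling from a zero-mean Gaussian null model whose covariance is a Kronecker product of per-mode covariances over time, measurement dimension, and experimental sample. All function and variable names are hypothetical, and this special case is only a stand-in: the construction described in the abstract matches user-specified moments along each mode, which is more general than a plain Kronecker product.

```python
# A minimal sketch (not the speaker's exact construction): draw a tensor
# sample from N(0, Sigma) with Sigma = S_time (x) S_dim (x) S_sample,
# up to vectorization order, by applying per-mode Cholesky factors.
import numpy as np

def kron_gaussian_sample(mode_covs, rng):
    """Draw one tensor whose vectorization has the Kronecker covariance."""
    dims = [S.shape[0] for S in mode_covs]
    z = rng.standard_normal(dims)                 # i.i.d. N(0, 1) tensor
    for k, S in enumerate(mode_covs):
        L = np.linalg.cholesky(S)                 # per-mode factor
        z = np.tensordot(L, z, axes=([1], [k]))   # multiply along mode k
        z = np.moveaxis(z, 0, k)                  # restore mode order
    return z

rng = np.random.default_rng(0)
T, N, C = 50, 10, 8
t = np.arange(T)
# Smooth-in-time covariance (squared-exponential plus jitter),
# correlated dimensions, independent samples.
S_time = np.exp(-0.5 * (t[:, None] - t[None, :])**2 / 5.0**2) + 1e-6 * np.eye(T)
S_dim = 0.5 * np.ones((N, N)) + 0.5 * np.eye(N)
S_sample = np.eye(C)

X = kron_gaussian_sample([S_time, S_dim, S_sample], rng)
print(X.shape)  # (50, 10, 8)
```

Sampling from such a surrogate makes the abstract's worry concrete: data drawn this way are temporally smooth and dimension-correlated by construction, so any "structure" that such a null model reproduces should not be claimed as a novel finding.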
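For the second part, a hedged sketch of the implicit-generative flavor: train a generator network whose samples satisfy a moment constraint while staying as spread out as possible. The pairwise-spread penalty below is a crude stand-in for entropy (invertible, flow-style architectures can instead track entropy exactly through the log-determinant of the Jacobian); nothing here should be read as the speaker's actual method, and all names are illustrative.

```python
# A rough sketch: maximum-entropy-style training of an implicit generator.
# The constraint is a first-moment match; the spread term is a crude
# entropy surrogate, not a true entropy.
import torch

torch.manual_seed(0)
G = torch.nn.Sequential(
    torch.nn.Linear(8, 64), torch.nn.ReLU(), torch.nn.Linear(64, 2)
)
opt = torch.optim.Adam(G.parameters(), lr=1e-3)
target_mean = torch.tensor([1.0, -1.0])      # example moment constraint

for step in range(2000):
    z = torch.randn(256, 8)                  # latent noise
    x = G(z)                                 # implicit samples
    constraint = ((x.mean(0) - target_mean) ** 2).sum()
    spread = torch.cdist(x, x).mean()        # encourage dispersed samples
    loss = constraint - 0.01 * spread
    opt.zero_grad()
    loss.backward()
    opt.step()
```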