Nonlinear ICA using temporal structure: a principled framework for unsupervised deep learning
This talk has been canceled/deleted
Unsupervised learning, in particular learning general nonlinear representations, is one of the deepest problems in machine learning. Estimating latent quantities in a generative model provides a principled framework, and has been used successfully in the linear case, e.g. with independent component analysis (ICA) and sparse coding. However, extending ICA to the nonlinear case has proven extremely difficult: a straightforward extension is unidentifiable, i.e. it is not possible to recover the latent components that actually generated the data. Here, we show that this problem can be solved by using temporal structure. We formulate two generative models in which the data is an arbitrary but invertible nonlinear transformation of time series (components) that are statistically independent of each other. Drawing on the theory of linear ICA, we formulate two distinct classes of temporal structure of the components which enable identification, i.e. recovery of the original independent components. We show that in both cases, the actual learning can be performed by ordinary neural network training where only the input is defined in an unconventional manner, making software implementations trivial. We rigorously prove that after such training, the units in the last hidden layer give the original independent components. [With Hiroshi Morioka; published at NIPS 2016 and AISTATS 2017.]
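To illustrate the claim that "only the input is defined in an unconventional manner", here is a minimal sketch of two such input constructions, in the spirit of the two papers: one labels each sample by the index of the time segment it falls in (so a multinomial classifier can be trained on it), and one contrasts true consecutive pairs against time-permuted pairs (so a binary classifier can be trained on it). The function names `tcl_dataset` and `pcl_dataset` and all details below are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def tcl_dataset(x, n_segments):
    """Segment-label input construction (sketch, assumed details).
    x: (T, d) observed time series. Splits the series into n_segments
    equal-length segments and labels each sample with its segment index.
    Ordinary multinomial classification on (x_t, label) is then the
    training task; the abstract states the last hidden layer recovers
    the independent components."""
    T = x.shape[0]
    seg_len = T // n_segments
    x = x[: seg_len * n_segments]          # drop the remainder samples
    labels = np.repeat(np.arange(n_segments), seg_len)
    return x, labels

def pcl_dataset(x, rng):
    """Contrastive pair construction (sketch, assumed details).
    Real consecutive pairs (x_t, x_{t-1}) get label 1; pairs whose
    second element is drawn from a random permutation of time get
    label 0. Ordinary binary classification on these pairs is then
    the training task."""
    real = np.hstack([x[1:], x[:-1]])      # (T-1, 2d) true pairs
    perm = rng.permutation(len(x) - 1)
    fake = np.hstack([x[1:], x[:-1][perm]])  # time-shuffled pairs
    pairs = np.vstack([real, fake])
    labels = np.concatenate([np.ones(len(real)), np.zeros(len(fake))])
    return pairs, labels
```

In both cases the downstream model is an off-the-shelf feed-forward classifier; the unconventional part is entirely in how the training inputs and labels are built from the raw time series.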
This talk is part of the Frontiers in Artificial Intelligence series.