Constructing temporal latent spaces: Representation learning for clustering and imputation on time series
If you have a question about this talk, please contact Dr R.E. Turner.
Time series data are common in many domains, for instance in medicine or finance. Their visualization and interpretation are often crucial for important decision-making tasks. However, real-world time series pose many challenges, such as noise, high dimensionality, and missingness. Many time-tested machine learning models, such as self-organizing maps (SOMs) and Gaussian processes (GPs), impose assumptions on their input data that real-world time series do not satisfy. We thus propose to learn a representation of the time series in a latent space where these assumptions are satisfied, such that the target models can be fit to this representation. Moreover, one can often train the representation learner and the target model end-to-end to yield optimal performance. In this talk, we will present two examples of this approach, with SOMs and GPs as target models. We will lay out the conceptual background and present empirical evidence on benchmark data sets and real-world medical time series.
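The general recipe described above can be sketched in a few lines. The following is a minimal illustration, not the speaker's actual models: a simple PCA projection stands in for a learned neural encoder, and a small self-organizing map is then fit in the resulting latent space. All data shapes, grid sizes, and learning-rate choices here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 noisy time series of length 50 (shapes are illustrative).
X = rng.normal(size=(100, 50)) + np.sin(np.linspace(0, 4 * np.pi, 50))

# Step 1: learn a low-dimensional latent representation.
# A PCA projection stands in here for a trained encoder network.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T  # 2-D latent codes, one per series

# Step 2: fit a small 3x3 self-organizing map in the latent space.
grid = np.array([(i, j) for i in range(3) for j in range(3)], dtype=float)
W = rng.normal(scale=0.1, size=(9, 2))  # one prototype vector per SOM node

for t in range(200):
    z = Z[rng.integers(len(Z))]
    bmu = np.argmin(((W - z) ** 2).sum(axis=1))  # best-matching unit
    # Neighbourhood-weighted update: nodes near the BMU on the grid
    # are pulled toward z, with a shrinking neighbourhood over time.
    d = ((grid - grid[bmu]) ** 2).sum(axis=1)
    sigma = max(1.0 - t / 200, 0.1)
    h = np.exp(-d / (2 * sigma ** 2))
    W += 0.1 * h[:, None] * (z - W)

# Each series is assigned to the SOM node closest to its latent code.
clusters = np.argmin(((Z[:, None, :] - W[None]) ** 2).sum(-1), axis=1)
```

In the end-to-end variant discussed in the talk, the encoder and the SOM prototypes would instead be trained jointly under a single objective, rather than in the two separate stages shown here.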
This talk is part of the Machine Learning @ CUED series.