Estimating entropy rates with confidence intervals
If you have a question about this talk, please contact Taylan Cemgil.
The entropy rate quantifies the amount of uncertainty or disorder produced by any dynamical system. In a spiking neuron, this uncertainty translates into the amount of information potentially encoded, and it is thus the subject of intense theoretical and experimental investigation. Estimating this quantity from observed experimental data is difficult and requires a judicious choice of probabilistic models, balancing two opposing biases. We use a model-weighting principle originally developed for lossless data compression, following the minimum description length principle. This weighting yields a direct estimator of the entropy rate which, compared to existing methods, exhibits significantly less bias and converges faster in simulation. With Monte Carlo techniques, we estimate a Bayesian confidence interval for the entropy rate. In related work, we apply these ideas to estimate the information rates between sensory stimuli and neural responses in experimental data.
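The abstract does not spell out the estimator itself, so as a rough illustration of the general task, the sketch below computes a simple plug-in entropy-rate estimate for a binary symbol sequence (such as a discretised spike train) and a Monte Carlo percentile interval obtained by simulating surrogate sequences from a fitted fixed-order Markov model. The function names, the plug-in estimator, and the fixed-order Markov surrogate scheme are all illustrative assumptions, not the MDL-weighted estimator or the Bayesian interval described in the talk.

```python
import numpy as np
from collections import Counter, defaultdict

def block_entropy(seq, k):
    """Shannon entropy (bits) of the empirical distribution of length-k blocks."""
    blocks = Counter(tuple(seq[i:i + k]) for i in range(len(seq) - k + 1))
    p = np.array(list(blocks.values()), float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

def entropy_rate(seq, k):
    """Plug-in entropy-rate estimate H(k) - H(k-1): the conditional entropy
    of the next symbol given the previous k-1 symbols."""
    return block_entropy(seq, k) - block_entropy(seq, k - 1)

def monte_carlo_ci(seq, k, n_surr=200, alpha=0.05, seed=0):
    """Percentile interval from surrogates simulated out of a fitted
    order-(k-1) Markov model -- an illustrative stand-in for the Monte
    Carlo confidence-interval procedure mentioned in the abstract."""
    rng = np.random.default_rng(seed)
    counts = defaultdict(Counter)
    for i in range(len(seq) - k + 1):
        counts[tuple(seq[i:i + k - 1])][seq[i + k - 1]] += 1
    # Normalise once per context: (next-symbol list, probability vector).
    model = {ctx: (list(c), np.array(list(c.values()), float) / sum(c.values()))
             for ctx, c in counts.items()}
    contexts = list(model)
    estimates = []
    for _ in range(n_surr):
        surr = list(contexts[rng.integers(len(contexts))])
        while len(surr) < len(seq):
            entry = model.get(tuple(surr[-(k - 1):]))
            if entry is None:  # dead-end context: restart from a random one
                surr.extend(contexts[rng.integers(len(contexts))])
                continue
            symbols, probs = entry
            surr.append(symbols[rng.choice(len(symbols), p=probs)])
        estimates.append(entropy_rate(surr, k))
    lo, hi = np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return entropy_rate(seq, k), (float(lo), float(hi))
```

For a fair-coin binary sequence the estimate should approach 1 bit per symbol, with the interval tightening as the sequence grows. The plug-in estimator used here is known to be biased downward for short sequences, which is precisely the problem the MDL model-weighting approach in the talk is designed to mitigate.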
This talk is part of the Signal Processing and Communications Lab Seminars series.