Double Feature: Optimal Precoding for MIMO and Divergence Estimation for Continuous Distributions
If you have a question about this talk, please contact Zoubin Ghahramani.

Optimal Linear Precoding for Multiple-Input Multiple-Output Gaussian Channels with Arbitrary Inputs

We investigate the linear precoding policy that maximizes the mutual information for general multiple-input multiple-output (MIMO) Gaussian channels with arbitrary input distributions, by capitalizing on the relationship between mutual information and minimum mean-square error (MMSE). The optimal linear precoder can be computed by means of a fixed-point equation as a function of the channel and the input constellation (a numerical sketch of this iteration appears after the abstracts). We show that diagonalizing the channel matrix does not maximize the information transmission rate for non-Gaussian inputs: a non-diagonal precoding matrix in general increases the rate, even for parallel non-interacting channels. Finally, we investigate correlated input distributions, which further increase the transmission rate in the low- and medium-SNR ranges.

Estimation of Information Theoretic Measures for Continuous Random Variables

We analyze the estimation of information-theoretic measures of continuous random variables, such as differential entropy, mutual information, and Kullback-Leibler divergence. The objective of this work is two-fold. First, we prove that information-theoretic measure estimates based on k-nearest-neighbor density estimation with fixed k converge almost surely, even though the fixed-k density estimate itself does not converge to the true density. Second, we show that these estimates do not converge when k grows linearly with the number of samples. Nevertheless, the nonconvergent estimates can still be used to solve the two-sample problem and to assess whether two random variables are independent (see the sketch below). We show that two-sample and independence tests based on these nonconvergent estimates compare favorably with the maximum mean discrepancy test and the Hilbert-Schmidt independence criterion, respectively.
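The fixed-point characterization in the first abstract suggests a simple alternating scheme: estimate the MMSE matrix for the current precoder, apply the fixed-point map, and rescale to satisfy the power constraint. Below is a minimal numerical sketch of that idea, assuming a real-valued 2x2 channel with BPSK inputs on each antenna; the function names (mmse_matrix, optimal_precoder), the Monte Carlo MMSE estimate, and all parameter values are illustrative choices, not code from the paper.

import numpy as np

def mmse_matrix(H, P, X, sigma2, n_mc=4000, seed=0):
    """Monte Carlo estimate of the MMSE matrix E = E[(x - E[x|y])(x - E[x|y])^T]
    for equiprobable constellation points (columns of X) sent through
    y = H P x + n, with n ~ N(0, sigma2 I)."""
    rng = np.random.default_rng(seed)  # fixed seed keeps the MC noise identical across iterations
    d, m = X.shape
    G = H @ P
    GX = G @ X                          # noiseless images of all constellation points
    E = np.zeros((d, d))
    for _ in range(n_mc):
        x = X[:, rng.integers(m)]
        y = G @ x + np.sqrt(sigma2) * rng.standard_normal(H.shape[0])
        # exact posterior over the finite constellation given y
        logw = -np.sum((y[:, None] - GX) ** 2, axis=0) / (2.0 * sigma2)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        xbar = X @ w                    # conditional mean E[x | y]
        D = X - xbar[:, None]
        E += (D * w) @ D.T              # posterior covariance, averaged over y
    return E / n_mc

def optimal_precoder(H, X, sigma2, power=2.0, n_iter=30):
    """Fixed-point iteration P <- H^T H P E(P), rescaled so that
    tr(P P^T) = power (the rescaling absorbs the Lagrange multiplier)."""
    d = H.shape[1]
    P = np.eye(d) * np.sqrt(power / d)
    for _ in range(n_iter):
        E = mmse_matrix(H, P, X, sigma2)
        P = H.T @ H @ P @ E
        P *= np.sqrt(power / np.trace(P @ P.T))
    return P

# BPSK on each of two antennas: the four points of {-1,+1}^2 as columns.
X = np.array([[+1.0, +1.0, -1.0, -1.0],
              [+1.0, -1.0, +1.0, -1.0]])
H = np.array([[1.0, 0.3],
              [0.0, 0.7]])
print(optimal_precoder(H, X, sigma2=0.5))

On a toy channel like this the iteration typically settles on a non-diagonal P, consistent with the abstract's point that diagonalizing the channel is suboptimal for discrete inputs.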
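For the second abstract, here is a minimal sketch of the fixed-k construction, assuming the standard k-nearest-neighbor Kullback-Leibler divergence estimator (k-th neighbor distance within the sample versus in the other sample) and a permutation-based two-sample test built on it; the function names and all parameter values are illustrative, not code from the paper.

import numpy as np
from scipy.spatial import cKDTree

def knn_kl_divergence(x, y, k=1):
    """Fixed-k nearest-neighbor estimate of D(P || Q) from samples
    x ~ P (shape (n, d)) and y ~ Q (shape (m, d))."""
    n, d = x.shape
    m = y.shape[0]
    # k-th neighbor distance within x; query k+1 because each point is its own 0-distance neighbor
    rho = cKDTree(x).query(x, k + 1)[0][:, -1]
    # k-th neighbor distance from each x_i into the y sample
    nu = cKDTree(y).query(x, k)[0]
    if k > 1:
        nu = nu[:, -1]
    return d * np.mean(np.log(nu / rho)) + np.log(m / (n - 1))

def two_sample_pvalue(x, y, k=1, n_perm=200, seed=0):
    """Permutation two-sample test that uses the (possibly nonconvergent)
    divergence estimate purely as a test statistic."""
    rng = np.random.default_rng(seed)
    stat = knn_kl_divergence(x, y, k)
    z = np.vstack([x, y])
    n = x.shape[0]
    hits = 0
    for _ in range(n_perm):
        idx = rng.permutation(z.shape[0])
        hits += knn_kl_divergence(z[idx[:n]], z[idx[n:]], k) >= stat
    return (hits + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
x = rng.standard_normal((500, 2))
y = rng.standard_normal((500, 2)) + 0.5   # mean-shifted alternative
print(knn_kl_divergence(x, y), two_sample_pvalue(x, y))

Because the permutation distribution is computed with the same estimator, the test can remain valid even where the divergence estimate itself is biased, which mirrors the abstract's point that nonconvergent estimates can still drive useful two-sample and independence tests.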
This talk is part of the Machine Learning @ CUED series.