
Double Feature: Optimal Precoding for MIMO and Divergence Estimation for Continuous Distributions


If you have a question about this talk, please contact Zoubin Ghahramani.

Optimal Linear Precoding for Multiple-Input Multiple-Output Gaussian Channels with Arbitrary Inputs

We investigate the linear precoding policy that maximizes the mutual information for general multiple-input multiple-output (MIMO) Gaussian channels with arbitrary input distributions, by capitalizing on the relationship between mutual information and minimum mean-square error (MMSE). The optimal linear precoder can be computed by means of a fixed-point equation as a function of the channel and the input constellation. We show that diagonalizing the channel matrix does not maximize the information transmission rate for non-Gaussian inputs: a non-diagonal precoding matrix in general increases the rate, even for parallel non-interacting channels. Finally, we also investigate the use of correlated input distributions, which further increase the transmission rate in the low and medium SNR regimes.
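The mutual-information/MMSE relationship mentioned above can be illustrated numerically. The following is a minimal sketch, not the paper's algorithm: it assumes a 2x2 real channel with equiprobable BPSK inputs and unit-variance noise (all illustrative choices), estimates the MMSE matrix E by Monte Carlo, and performs projected gradient ascent on the mutual information using the known I-MMSE gradient dI/dP = H^T H P E under a fixed power constraint.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2x2 real MIMO channel with BPSK inputs (assumptions of this
# sketch; the talk's result holds for arbitrary input constellations).
H = np.array([[1.0, 0.5],
              [0.3, 0.8]])
# All 4 BPSK input vectors x in {-1,+1}^2, equiprobable.
X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)

def mmse_matrix(P, n_samples=2000):
    """Monte Carlo estimate of E = E[(x - E[x|y])(x - E[x|y])^T]."""
    G = H @ P
    E = np.zeros((2, 2))
    for _ in range(n_samples):
        x = X[rng.integers(4)]
        y = G @ x + rng.standard_normal(2)      # unit-variance Gaussian noise
        # Posterior over the 4 symbols: p(x_j | y) prop. exp(-||y - G x_j||^2 / 2)
        logw = -0.5 * np.sum((y - X @ G.T) ** 2, axis=1)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        xhat = w @ X                            # conditional mean E[x | y]
        e = x - xhat
        E += np.outer(e, e)
    return E / n_samples

# Projected gradient ascent on the mutual information, using the I-MMSE
# gradient dI/dP = H^T H P E and re-projecting onto tr(P P^T) = 2.
P = np.eye(2)
power = 2.0
for _ in range(20):
    E = mmse_matrix(P)
    P = P + 0.5 * (H.T @ H @ P @ E)
    P *= np.sqrt(power / np.trace(P @ P.T))
```

The step size and sample counts are arbitrary; the point is only that each update moves P along the mutual-information gradient, which is available in closed form through the MMSE matrix rather than through a direct (and expensive) mutual-information computation.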

Estimation of Information Theoretic Measures for Continuous Random Variables

We analyze the estimation of information-theoretic measures of continuous random variables such as differential entropy, mutual information, and Kullback-Leibler divergence. The objective of this work is twofold. First, we prove that the information-theoretic measure estimates based on k-nearest-neighbor density estimation with fixed k converge almost surely, even though the k-nearest-neighbor density estimate itself with fixed k does not converge to the true density. Second, we show that the information-theoretic measure estimates do not converge for k growing linearly with the number of samples. Nevertheless, these nonconvergent estimates can be used for solving the two-sample problem and for assessing whether two random variables are independent. We show that the two-sample and independence tests based on these nonconvergent estimates compare favorably with the maximum mean discrepancy test and the Hilbert-Schmidt independence criterion, respectively.
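A minimal sketch of a fixed-k nearest-neighbor divergence estimator of the kind discussed above, for one-dimensional samples. The estimator form is the standard k-NN ratio construction (distance to the k-th neighbor within the same sample versus within the other sample); the sample sizes, the choice of Gaussians, and the brute-force distance computation are illustrative assumptions of this sketch.

```python
import numpy as np

def knn_kl_divergence(x, y, k=1):
    """k-NN estimate of D(P || Q) from samples x ~ P and y ~ Q (1-D here).

    Uses the nearest-neighbor ratio estimator
        D_hat = (d/n) * sum_i log(nu_k(i) / rho_k(i)) + log(m / (n - 1)),
    where rho_k(i) is the distance from x_i to its k-th nearest neighbor
    in x (excluding itself) and nu_k(i) the distance to its k-th nearest
    neighbor in y.
    """
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    y = np.asarray(y, dtype=float).reshape(-1, 1)
    n, m = len(x), len(y)
    d = x.shape[1]
    # Brute-force pairwise distances (fine for a sketch; use a k-d tree at scale).
    dxx = np.abs(x - x.T)           # n x n distances within x
    dxy = np.abs(x - y.T)           # n x m distances from x to y
    np.fill_diagonal(dxx, np.inf)   # exclude the point itself
    rho = np.sort(dxx, axis=1)[:, k - 1]
    nu = np.sort(dxy, axis=1)[:, k - 1]
    return (d / n) * np.sum(np.log(nu / rho)) + np.log(m / (n - 1))

rng = np.random.default_rng(1)
p = rng.normal(0.0, 1.0, size=2000)   # samples from P = N(0, 1)
q = rng.normal(1.0, 1.0, size=2000)   # samples from Q = N(1, 1)
est = knn_kl_divergence(p, q, k=1)    # true D(P || Q) = 0.5
```

Note that the k = 1 density estimate at any point does not converge, yet averaging its log-ratio over the sample still yields a consistent divergence estimate, which is the first result stated in the abstract.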

This talk is part of the Machine Learning @ CUED series.

