BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Machine Learning @ CUED
SUMMARY:Double Feature: Optimal Precoding for MIMO and Div
ergence Estimation for Continuous Distributions -
Dr Fernando Perez-Cruz (Princeton)
DTSTART;TZID=Europe/London:20080716T140000
DTEND;TZID=Europe/London:20080716T150000
UID:TALK12799AThttp://talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/12799
DESCRIPTION:Optimal Linear Precoding for Multiple-Input Multip
le-Output Gaussian Channels with Arbitrary Inputs\
n\nWe investigate the linear precoding policy that
maximizes the mutual information for general mult
iple-input multiple-output (MIMO) Gaussian channel
s with arbitrary input distributions\, by capitali
zing on the relationship between mutual informatio
n and minimum mean-square error. The optimal linea
r precoder can be computed by means of a fixed-poi
nt equation as a function of the channel and the i
nput constellation. We show that diagonalizing the
channel matrix does not maximize the information
transmission rate for non-Gaussian inputs. A non-di
agonal precoding matrix in general increases the i
nformation transmission rate\, even for parallel n
on-interacting channels. Finally\, we also investi
gate the use of correlated input distributions\, w
hich further increase the transmission rate for low and medium SNR ranges.\n\n\nEstimation of Inform
ation Theoretic Measures for Continuous Random Var
iables\n\nWe analyze the estimation of information
theoretic measures of continuous random variables
such as differential entropy\, mutual informatio
n or Kullback-Leibler divergence. The objective of
this paper is two-fold. First\, we prove that the
information theoretic measure estimates using the
k-nearest-neighbor density estimation with fixed
k converge almost surely\, even though the k-neare
st-neighbor density estimation with fixed k does n
ot converge to its true measure. Second\, we show
that the information theoretic measure estimates d
o not converge for k growing linearly with the num
ber of samples. Nevertheless\, these nonconvergent
estimates can be used for solving the two-sample
problem and assessing if two random variables are
independent. We show that the two-sample and indep
endence tests based on these nonconvergent estimat
es compare favorably with the maximum mean discrep
ancy test and the Hilbert-Schmidt independence cri
terion\, respectively.\n
LOCATION:Engineering Department\, CBL Room 438
CONTACT:Zoubin Ghahramani
END:VEVENT
END:VCALENDAR