BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Statistics
SUMMARY:Long Story Short: Omitted Variable Bias in Causal
Machine Learning - Victor Chernozhukov (MIT)
DTSTART;TZID=Europe/London:20221007T160000
DTEND;TZID=Europe/London:20221007T170000
UID:TALK182714AThttp://talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/182714
DESCRIPTION:We derive general\, yet simple\, sharp bounds on t
he size of the omitted variable bias for a broad c
lass of causal parameters that can be identified a
s linear functionals of the conditional expectatio
n function of the outcome. Such functionals encomp
ass many of the traditional targets of investigati
on in causal inference studies\, such as\, for exa
mple\, (weighted) average of potential outcomes\,
average treatment effects (including subgroup effe
cts\, such as the effect on the treated)\, (weight
ed) average derivatives\, and policy effects from
shifts in covariate distribution -- all for genera
l\, nonparametric causal models. Our construction
relies on the Riesz-Frechet representation of the
target functional. Specifically\, we show how the
bound on the bias depends only on the additional v
ariation that the latent variables create both in
the outcome and in the Riesz representer for the p
arameter of interest. Moreover\, in many important
cases (e.g.\, average treatment effects and avera
ge derivatives) the bound is shown to depend on ea
sily interpretable quantities that measure the exp
lanatory power of the omitted variables. Therefore
\, simple plausibility judgments on the maximum ex
planatory power of omitted variables (in explainin
g treatment and outcome variation) are sufficient
to place overall bounds on the size of the bias. F
urthermore\, we use debiased machine learning to p
rovide flexible and efficient statistical inferenc
e on learnable components of the bounds. Finally\,
empirical examples demonstrate the usefulness of
the approach.\n\nLink to paper: https://www.nber.o
rg/papers/w30302
LOCATION:MR5\, Centre for Mathematical Sciences
CONTACT:Qingyuan Zhao
END:VEVENT
END:VCALENDAR