It is hard to be strongly faithful
Many algorithms for inferring causality are based on partial correlation testing. Partial correlations define hypersurfaces in the parameter space of a directed Gaussian graphical model. The volumes obtained by bounding partial correlations play an important role in the performance of causal inference algorithms. By computing these volumes we show that the so-called “strong-faithfulness assumption”, one of the main constraints of many causal inference algorithms, is in fact extremely restrictive, implying fundamental limitations for these algorithms. We then propose an alternative method that involves finding the permutation of the variables that yields the sparsest DAG. In the Gaussian setting, our sparsest permutation (SP) algorithm boils down to determining the permutation yielding the sparsest Cholesky decomposition of the inverse covariance matrix. We prove that the constraints required for our SP algorithm are strictly weaker than strong-faithfulness and are necessary for any causal inference algorithm based on conditional independence testing.
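To make the Gaussian formulation concrete, here is a minimal brute-force sketch (not the speakers' implementation): for each ordering of the variables, permute the covariance matrix, take the Cholesky factor of the resulting precision matrix, and count its off-diagonal nonzeros as candidate DAG edges; the ordering with the fewest edges is returned. The function name, the tolerance used to declare an entry zero, and the toy chain example are illustrative assumptions, and in practice one would use a statistical test rather than a fixed threshold and avoid enumerating all permutations.

```python
import itertools
import numpy as np

def sparsest_permutation(cov, tol=1e-8):
    """Brute-force SP sketch: for each variable ordering, permute the
    covariance matrix, invert it to get the precision matrix, take its
    Cholesky factor, and count off-diagonal nonzeros (candidate edges).
    Returns the ordering with the fewest edges and that edge count."""
    p = cov.shape[0]
    best_order, best_edges = None, np.inf
    for order in itertools.permutations(range(p)):
        perm_cov = cov[np.ix_(order, order)]
        precision = np.linalg.inv(perm_cov)
        L = np.linalg.cholesky(precision)  # lower-triangular factor
        edges = int(np.sum(np.abs(np.tril(L, k=-1)) > tol))
        if edges < best_edges:
            best_order, best_edges = order, edges
    return best_order, best_edges

# Toy example (assumed for illustration): a chain X0 -> X1 -> X2
# generated from a linear Gaussian structural equation model.
rng = np.random.default_rng(0)
n = 5000
x0 = rng.normal(size=n)
x1 = 0.8 * x0 + rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(size=n)
data = np.column_stack([x0, x1, x2])
order, edges = sparsest_permutation(np.cov(data, rowvar=False), tol=0.05)
print(order, edges)  # an ordering consistent with the chain, with 2 edges
```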
This talk is part of the Statistics series.