

## Adaptive Piecewise Polynomial Estimation via Trend Filtering

- Ryan J. Tibshirani, Carnegie Mellon University
- Friday 30 May 2014, 16:00-17:00
- MR12, Centre for Mathematical Sciences, Wilberforce Road, Cambridge.
We discuss trend filtering, a recently proposed tool of Kim et al. (2009) for nonparametric regression. The trend filtering estimate is defined as the minimizer of a penalized least squares criterion, in which the penalty term sums the absolute kth order discrete derivatives over the input points. Perhaps not surprisingly, trend filtering estimates appear to have the structure of kth degree spline functions, with adaptively chosen knot points (we say "appear" here as trend filtering estimates are not really functions over continuous domains, and are only defined over the discrete set of inputs).

This brings to mind comparisons to other nonparametric regression tools that also produce adaptive splines; in particular, we compare trend filtering to smoothing splines, which penalize the sum of squared derivatives across input points, and to locally adaptive regression splines (Mammen & van de Geer 1997), which penalize the total variation of the kth derivative. Empirically, trend filtering estimates adapt to the local level of smoothness much better than smoothing splines, and further, they exhibit a remarkable similarity to locally adaptive regression splines.

Theoretically, (suitably tuned) trend filtering estimates converge to the true underlying function at the minimax rate over the class of functions whose kth derivative is of bounded variation. The proof of this result follows from an asymptotic pairing of trend filtering and locally adaptive regression splines, which have already been shown to converge at the minimax rate (Mammen & van de Geer 1997). At the core of this argument is a new result tying together the fitted values of two lasso problems that share the same outcome vector, but have different predictor matrices.

This talk is part of the Statistics series.

## This talk is included in these lists:

- All CMS events
- All Talks (aka the CURE list)
- CMS Events
- Cambridge Forum of Science and Humanities
- Cambridge Language Sciences
- Chris Davis' list
- DPMMS Lists
- DPMMS info aggregator
- DPMMS lists
- Guy Emerson's list
- Hanchen DaDaDash
- Interested Talks
- MR12, Centre for Mathematical Sciences, Wilberforce Road, Cambridge
- Machine Learning
- School of Physical Sciences
- Statistical Laboratory info aggregator
- Statistics
- Statistics Group
- bld31
- custom
- rp587
Note that ex-directory lists are not shown.
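The penalized least squares criterion described in the abstract can be written as (1/2)‖y − β‖² + λ‖D β‖₁, where D is the (k+1)th order discrete difference operator. A minimal numpy sketch of this objective (not code from the talk; the function names `difference_matrix` and `trend_filter_objective` are illustrative choices) is:

```python
import numpy as np

def difference_matrix(n, order):
    """Discrete difference operator of the given order on n points.

    Repeatedly applying np.diff to the identity yields a matrix D such
    that (D @ beta) stacks the order-th forward differences of beta --
    the discrete derivatives that the trend filtering penalty sums.
    """
    D = np.eye(n)
    for _ in range(order):
        D = np.diff(D, axis=0)  # each pass takes first differences of the rows
    return D

def trend_filter_objective(y, beta, lam, k):
    """Trend filtering criterion for kth order (piecewise degree-k) fits:
    (1/2) * ||y - beta||^2 + lam * sum of |(k+1)th discrete derivatives of beta|.
    """
    D = difference_matrix(len(y), k + 1)
    return 0.5 * np.sum((y - beta) ** 2) + lam * np.sum(np.abs(D @ beta))

# A piecewise linear vector has zero second differences away from its knots,
# so for k = 1 only the single knot (at index 3) contributes to the penalty.
y = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0])
beta = y.copy()  # exact interpolant: zero squared-error term
print(trend_filter_objective(y, beta, lam=1.0, k=1))  # prints 2.0
```

The l1 penalty on discrete derivatives is what drives the adaptive knot selection discussed in the abstract: minimizing this criterion (a convex problem) zeroes out most higher-order differences, leaving a sparse set of active knots.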