Learning approaches have recently become very popular in the field of inverse problems. A large variety of methods has been established in recent years, ranging from bi-level learning to high-dimensional machine learning techniques. Most learning approaches, however, only aim at fitting parametrised models to favourable training data whilst ignoring misfit training data completely. In this talk, we follow up on the idea of learning parametrised regularisation functions by quotient minimisation. We consider one- and higher-dimensional filter functions to be learned and allow for fit- and misfit-training data consisting of multiple functions. We first present results resembling the behaviour of well-established derivative-based sparse regularisers like total variation or higher-order total variation in one dimension. Then, we introduce novel families of non-derivative-based regularisers. This is accomplished by learning favourable scales and geometric properties while at the same time avoiding unfavourable ones.

LOCATION: Seminar Room 1, Newton Institute
CONTACT: INI IT
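As a rough sketch of the quotient-minimisation idea (the notation here is assumed for illustration and may differ from the talk's exact formulation): given favourable training signals $u^{+}$ and unfavourable ones $u^{-}$, a filter $w$ defining the regulariser $R_w(u) = \| w \ast u \|_1$ can be learned by making $R_w$ small on the fit data relative to the misfit data, e.g.

$$
\min_{\| w \|_2 = 1} \; \frac{\| w \ast u^{+} \|_1}{\| w \ast u^{-} \|_2},
$$

so that minimisers penalise structures present in the unfavourable data while leaving the favourable ones cheap. The normalisation constraint on $w$ rules out the trivial solution $w = 0$; with multiple fit and misfit functions, the norms in numerator and denominator would be summed over the respective training sets.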