BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Applied and Computational Analysis
SUMMARY:Greedy-LASSO\, Greedy-Net: Generalization and unrolling of
  greedy sparse recovery algorithms - Sina Mohammadtaheri
DTSTART;TZID=Europe/London:20240502T150000
DTEND;TZID=Europe/London:20240502T160000
UID:TALK215536AThttp://talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/215536
DESCRIPTION:Sparse recovery generally aims at reconstructing a sparse
  vector\, given linear measurements performed via a measurement (or
  sensing) matrix\, which is typically underdetermined. Greedy (and
  thresholding) sparse recovery algorithms are known to serve as a
  suitable alternative to convex optimization techniques\, in
  particular in low-sparsity regimes. In this talk\, I take orthogonal
  matching pursuit (OMP) as an example and establish a connection
  between OMP and convex optimization decoders on one side and neural
  networks on the other. To achieve the former\, we adopt a
  loss-function-based perspective and propose a framework based on OMP
  that leads to greedy algorithms for a large class of loss
  functions\, including the well-known (weighted-)LASSO family\, with
  explicit formulas for the choice of the ``greedy selection
  criterion". We show numerically that these greedy algorithms inherit
  properties of their ancestor convex decoders. In the second part of
  the talk\, we leverage ``softsorting" to resolve the
  non-differentiability of OMP due to (arg)sorting\, in order to
  derive a differentiable version of OMP that we call ``Soft-OMP"\,
  which we demonstrate\, numerically and theoretically\, to be a good
  approximation of OMP. We then unroll iterations of OMP onto layers
  of a neural network with weights as semantic trainable parameters
  that capture the structure within the data. In doing so\, we also
  connect our approach to learning weights in weighted sparse
  recovery. I will conclude the talk by presenting implications of our
  framework for other greedy algorithms\, such as CoSaMP and IHT\,
  and highlight some open problems. This is joint work with Simone
  Brugiapaglia (Concordia University) and Matthew Colbrook (University
  of Cambridge).
LOCATION:Centre for Mathematical Sciences\, MR12
CONTACT:Matthew Colbrook
END:VEVENT
END:VCALENDAR