BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Sigma*: Symbolic Learning of Input-Output Specifications - Matko B
 otincan
DTSTART:20130114T130000Z
DTEND:20130114T140000Z
UID:TALK42394@talks.cam.ac.uk
CONTACT:Peter Sewell
DESCRIPTION:I will present Sigma*\, a novel technique for learning symboli
 c models\nof software behaviour. Sigma* targets synthesis of stream progra
 ms.\nStream programs\, consisting of interconnected filters processing\nse
 quences of data items\, often have to be optimised against a variety\nof g
 oals\, e.g.\, speed\, latency\, throughput\, etc. For instance\, a\ncompil
 er might want to place a "cheap" filter that drops data before\na "costl
 y" one that does processing\, or fuse two filters when it is\nprofitable
  to trade pipeline parallelism for lower communication cost.\nSigma* learn
 s faithful input-output models of arbitrarily structured\nfilters by a nov
 el Angluin-style learning algorithm based on dynamic\nsymbolic execution a
 nd counterexample-guided abstraction refinement.\nInferred models\, acting
  as behavioural specifications\, can be\neffectively composed and checked 
 for equivalence. For instance\, to\ncheck whether two filters can be safel
 y reordered we check whether the\ncomposition of inferred models commutes.
  In addition\, the inferred\nmodels can be executed concurrently\, enablin
 g automated parallelisation\nof irregular loop nests that are not polyhedr
 al\, disjoint or with a\nstatic pipeline structure. In preliminary experim
 ents\, in addition to\nenabling optimisations of several real-world stream
  benchmark programs\nnot possible with current tools\, Sigma*-generated mo
 dels yielded on\naverage three-fold speedups over the most optimised gcc-g
 enerated code\nby synthesising SIMD code using Intel SSE intrinsics (for s
 tateless\nmodels) and GPU code using CUDA (for stateful models).\n
LOCATION:FW26
END:VEVENT
END:VCALENDAR
